FPF Submits Comments to Inform New York Children’s Privacy Rulemaking Processes
At the end of the 2024 legislative session, New York State passed a pair of bills aimed at creating heightened protections for children and teens online. One, the New York Child Data Protection Act (NYCDPA), applies to a broad range of online services that are “primarily directed to children.” The NYCDPA creates novel substantive data minimization requirements, restricts the sale of children’s data (with “children” defined as anyone under 18), and requires businesses to respect “age flags” – new device signals intended to convey whether a user is a minor. The second law, the Stop Addictive Feeds Exploitation (SAFE) for Kids Act, is more narrowly focused on social media platforms and restricts the presentation of content to minors through “addictive feeds.”
On September 30, the Future of Privacy Forum (FPF) filed comments with the New York Office of the Attorney General (OAG) to inform forthcoming rulemaking for the implementation of these two frameworks. While strengthening protections for youth privacy and online safety has been a priority for lawmakers over the last several years, New York’s two new laws each take unique approaches. FPF’s comments seek to ensure that New York’s new frameworks protect and empower minors while supporting interoperability with existing state and federal privacy frameworks.
New York Child Data Protection Act
The NYCDPA creates new restrictions on the collection and processing of the personal data of teen users who are outside the scope of the federal Children’s Online Privacy Protection Act (COPPA). Under the law, a covered business must obtain “informed consent” to use teen data, or the processing must be strictly necessary to meet one of nine enumerated “permissible purposes,” such as conducting internal business operations or preventing cybersecurity threats. The requirement that, absent informed consent, processing of data from minors 13 and older must be strictly necessary could prove stricter than COPPA’s standards, especially with respect to many digital advertising practices. The law also restricts the sale of minors’ personal data, which includes allowing a third party to sell that data.
Obtaining “informed consent” under the NYCDPA requires satisfying a number of conditions, some of which diverge from comparable privacy regimes. Consent must be made separately from any other transaction or part of a transaction; be made in the absence of any “dark patterns”; clearly and conspicuously state that the processing for which consent is requested is not strictly necessary and that the minor may decline without being prevented from continuing to use the service; and clearly present the option to refuse consent as the most prominent option.
The NYCDPA is also unique in providing for the use of device signals to transmit legally binding information about a user’s age and their informed consent choices. Such technologies are not commonplace in the market and raise a number of both technical and policy questions and challenges.
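The law does not prescribe a technical format for these signals. As a rough illustration only, an age flag could in principle travel like existing browser preference signals such as the Global Privacy Control header; the header name and values in the sketch below are hypothetical and are not drawn from the statute or any existing standard.

```python
# Hypothetical sketch only: the NYCDPA does not define a wire format for "age flags."
# The header name and values below are invented for illustration (loosely modeled on
# the Global Privacy Control's Sec-GPC header) and are not part of any law or standard.

HYPOTHETICAL_AGE_FLAG_HEADER = "Sec-Age-Flag"  # invented name

def read_age_flag(headers: dict[str, str]) -> bool | None:
    """Return True if the device signals that the user is a minor, False if it
    signals an adult, or None if no recognizable signal is present."""
    raw = headers.get(HYPOTHETICAL_AGE_FLAG_HEADER, "").strip().lower()
    if raw == "minor":
        return True
    if raw == "adult":
        return False
    return None  # absent or unrecognized: fall back to other compliance measures

# Example request carrying the hypothetical signal
print(read_age_flag({"Sec-Age-Flag": "minor"}))  # True
```

How a service should treat absent, conflicting, or spoofed signals is exactly the kind of technical question such a mechanism raises.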
With these unique provisions in mind, FPF’s comments recommend that the OAG:
1. Consider existing sources of law, including the COPPA Rule’s internal operations exception, state privacy laws, and the GDPR to provide guidance on the scope of “permissible processing” activities;
2. Where appropriate, align core privacy concepts with the developing state comprehensive privacy landscape, including the definition of “personal information” and opportunities for data sharing for research;
3. Consult with the New York State Education Department to ensure alignment with New York’s existing student privacy laws and implementing regulations, avoiding disruption to schools’ and students’ access to and use of educational products and services;
4. Mitigate privacy, technical, and practical implementation concerns with “age flags” by further consulting with stakeholders and establishing baseline criteria for qualifying signals. FPF offers technical and policy considerations for the OAG to weigh as it furthers this emerging technology; and
5. Explicitly distinguish informed consent device signals from “age flags”, given that providing consent at scale raises a separate set of challenges and may undermine the integrity of the NYCDPA’s opt-in consent framework.
Stop Addictive Feeds Exploitation (SAFE) for Kids Act
The SAFE for Kids Act restricts social media platforms from offering “addictive feeds” unless the service has conducted “commercially reasonable” age verification to determine that a user is over 17 years of age, or the service has obtained verifiable parental consent (VPC). The legislative intent makes clear that ordering content in a chronological list would not be considered an “addictive feed.” A social media platform will also need to obtain VPC to send a minor notifications concerning an addictive feed between the hours of midnight and 6 am.
“Addictive feeds” are broadly defined as a service in which user-generated content is recommended, selected, or prioritized based, in whole or in part, on user data. There are six carve-outs from the definition of “addictive feed,” such as displaying or prioritizing content that was specifically and unambiguously requested by the user, or displaying content in response to a specific search inquiry from a user.
Notably, the SAFE for Kids Act relies on parental consent for teens to receive “addictive feeds.” In contrast, the Child Data Protection Act empowers teens to provide informed consent themselves for a broad range of activities. The divergence between these two laws over who can provide consent for a teen using a service may lead to challenges in understanding individual rights and protections.
Given the critical role of age verification and parental consent within the SAFE for Kids Act, FPF’s comments to the OAG focus on highlighting considerations, risks, and benefits of various methods for conducting age assurance and parental consent. In particular, we note that:
1. There are three primary categories of age assurance in the United States: age declaration, age estimation, and age verification. Each method has its own challenges and risks, which should be carefully weighed against the state’s interest in protecting minors online, the state of current technologies, and end-user realities when developing age verification standards.
2. When exploring appropriate methods for providing verifiable parental consent, the OAG should consider the known problems, concerns, and friction points with the existing verifiable parental consent framework under COPPA.
3. Strong data minimization, use limitations, and data retention standards could enhance data protection and user trust in age assurance and VPC requirements.
Vermont’s H. 121: The Vermont Data Privacy Act and Age-Appropriate Design Code
Update: On June 13, 2024, Governor Phil Scott vetoed H. 121, marking the first gubernatorial veto of a comprehensive privacy bill passed by a state legislature. The analysis below was prepared before the veto.
Immediately prior to the close of the state legislative session on May 10, 2024, the Vermont legislature passed H. 121, “An act relating to enhancing consumer privacy and the age-appropriate design code.” If signed by Governor Scott, H. 121 would give Vermont the farthest-reaching comprehensive privacy law in the country. While the Vermont Data Privacy Act (VDPA) is modeled after the popular Connecticut privacy framework, it goes further in many places, drawing inspiration from a variety of sources: data minimization provisions inspired by Maryland’s new privacy law, new digital civil rights protections pulled from the American Data Privacy and Protection Act, a trimmed-down Age-Appropriate Design Code (AADC) focused on design features, and an entirely novel private right of action.
Applicability
At over 100 pages, H. 121 makes determining whether and how an organization is covered a more complicated question than under most state privacy laws. The VDPA contains unique scoping provisions and tiered effective dates tied to an organization’s size and the types of data it processes, and the AADC’s scope is entirely distinct from the rest of the VDPA.
1. Tiered effective dates
The Vermont Data Privacy Act establishes a tiered timeline for applicability. For larger organizations that process the data of 25,000 or more Vermont consumers, or that process the data of 12,500 or more consumers and derive more than 25% of their revenue from selling personal data, the law will take effect on July 1, 2025. Come July 1, 2027, the law will also apply to organizations that either process the data of 6,250 or more consumers, or process the data of 3,125 or more consumers and derive more than 20% of their revenue from selling personal data. Even accounting for Vermont’s small population, these are proportionally the lowest applicability thresholds of any comprehensive state privacy law.
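As a minimal sketch of the tiered thresholds summarized above (and only those thresholds; the statute’s exemptions and the separate AADC and sensitive-data scoping are ignored), the applicability logic might be expressed as follows.

```python
from datetime import date

def vdpa_effective_date(vt_consumers: int, sale_revenue_share: float) -> date | None:
    """Sketch of the VDPA's tiered applicability thresholds as summarized above.

    vt_consumers: number of Vermont consumers whose personal data is processed.
    sale_revenue_share: fraction of gross revenue derived from selling personal data.
    Returns the date the law would begin to apply, or None if neither tier is met.
    Ignores exemptions and the separate AADC and sensitive-data scoping.
    """
    if vt_consumers >= 25_000 or (vt_consumers >= 12_500 and sale_revenue_share > 0.25):
        return date(2025, 7, 1)
    if vt_consumers >= 6_250 or (vt_consumers >= 3_125 and sale_revenue_share > 0.20):
        return date(2027, 7, 1)
    return None

print(vdpa_effective_date(13_000, 0.30))  # 2025-07-01
print(vdpa_effective_date(7_000, 0.05))   # 2027-07-01
```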
2. No revenue and data processing thresholds for health data and kids’ data
The VDPA contains heightened protections for minors’ data and provisions concerning consumer health data that are not tied to the above revenue and data processing thresholds. As a result, small businesses could potentially have obligations under these provisions. Vermont joins an emerging trend originating in Connecticut of making certain protections for the most sensitive categories of personal information generally applicable, rather than being subject to a small business exception.
3. Separate applicability for Age-Appropriate Design Code
The standalone AADC section also contains a unique applicability threshold. Rather than applying to controllers, the AADC section applies to “covered businesses” that collect consumers’ personal data, determine the purposes and means of processing that data, and, alone or in combination, buy, receive for commercial purposes, sell, or share the personal data of at least 50% of their consumers. Because this section is limited to businesses and the 50% threshold is tied to the share of consumers whose data is bought, received, sold, or shared, it will likely apply to a smaller subset of organizations than those covered under the VDPA. The ultimate scope of this provision is likely to be substantially shaped by how the term “receive for commercial purposes” is interpreted.
Notable protections for Vermonters
Much ink has been spilled over the “state privacy patchwork,” but the Vermont law itself is a bit of a patchwork, given that it draws inspiration from multiple sources, such as Connecticut, Maryland, and the American Data Privacy and Protection Act. Many rights given to individuals may be familiar, such as accessing, correcting, and deleting personal information. However, Vermont’s patchwork bill creates notable differences, including data minimization, prohibitions on selling sensitive data, and prohibitions on discriminatory processing.
1. Data minimization
The VDPA places default limits on the collection of personal data to what is reasonably necessary and proportionate to provide or maintain a specific product or service requested by the individual. This limit matches Maryland’s; however, Vermont lacks Maryland’s requirement that the processing of sensitive data be strictly necessary, making Vermont somewhat less restrictive. Vermont further limits any processing for a purpose not disclosed in a privacy notice unless the individual’s consent is obtained or the purpose is reasonably necessary to and compatible with a disclosed purpose.
2. Prohibitions on selling sensitive data
Similar to Maryland, the VDPA prohibits the sale of sensitive data. Under the VDPA, sensitive data includes, among other things, consumer health data, biometric or genetic data, and personal data collected from a known minor. While the privacy protections for minors’ data and consumer health data largely follow Connecticut’s, Vermont goes further by not allowing the sale of sensitive data even with consent. Vermont may go further than even Maryland because it defines “sale” more broadly than any state privacy law to date, including the exchange of personal information not just for monetary value or other valuable consideration, but for a commercial purpose.
3. Prohibitions on discriminatory processing
Vermont prohibits processing an individual’s personal data in violation of state or federal laws that prohibit unlawful discrimination, or in a manner that discriminates against individuals or otherwise restricts the enjoyment of goods and services based on protected classes. There are limited exceptions for self-testing and diversity applicant pools. These civil rights protections, derived from the American Data Privacy and Protection Act (ADPPA) and the American Privacy Rights Act discussion draft, go further than existing state privacy laws because the prohibition is not strictly tied to discrimination that is already unlawful. One minor difference from the ADPPA is that Vermont prohibits discrimination “against individuals,” rather than “in a manner that discriminates,” though this distinction may not have a practical impact.
4. Broad Right to Opt out of Targeted Advertising
Like the Connecticut framework, the VDPA gives consumers the right to opt out of targeted advertising. However, the VDPA broadens the definition of targeted advertising to include advertising based on first-party data shared between distinctly branded websites, including websites operated by the same controller. This expanded definition goes much further than any existing state privacy law.
A limited private right of action
To date, the only comprehensive state privacy law with any private right of action is California’s, which narrowly provides that certain data breaches can be the basis for a cause of action. Otherwise, comprehensive privacy laws are enforced solely by government regulators such as State Attorneys General. Vermont would break this mold by allowing individuals to bring suit against “large data holders” and data brokers for alleged violations involving sensitive data or the confidentiality of consumer health data. Vermont defines large data holders as organizations that process the data of more than 100,000 Vermont residents, a notable threshold given that Vermont’s population was roughly 643,000 as of the 2020 census. By limiting the private right of action to specific types of entities and particular kinds of privacy violations, the provision reflects a compromise between lawmakers in the House, who wanted a broad private right of action, and lawmakers in the Senate, who struck it entirely in an earlier draft.
In a further act of compromise, Vermont legislators took a creative approach to the timeframe for bringing any lawsuits. The private right of action goes into effect January 1, 2027, 18 months after the largest organizations must come into compliance with the law, and will sunset after two years unless the Vermont legislature passes a new law to affirmatively reauthorize it. Separately, the Attorney General is charged with conducting a study and developing recommendations to the legislature on implementing a private right of action, including applicability thresholds to ensure that a private right of action does not harm good-faith actors or small businesses, and damages that balance the consumer interest in enforcing rights against any incentives for litigants with frivolous claims. The report is due by January 15, 2026, a year before the private right of action takes effect and as the legislature begins its next session.
Heightened protections for minors, including two duties of care
Because Vermont draws from Connecticut’s framework, the VDPA includes heightened protections for children and teens that largely mirror Connecticut’s. These protections include a duty to avoid a “heightened risk of harm” to minors, restrictions on selling minors’ data, and additional risk assessment requirements for controllers who process minors’ data. One subtle but significant difference is that Vermont adds an additional harm to be considered in the duty of care and data protection impact assessments: covered organizations will need to consider any “unintended disclosure of personal data of a minor.” Interestingly, similar language was considered in Colorado this legislative session but was ultimately rejected in favor of “unauthorized disclosure of the personal data of minors as a result of a security breach.” The harm articulated in Vermont is much broader and could cover inadvertent disclosures, not just disclosures due to a security breach.
However, the protections focused on children and teens do not end there. During the 2024 session, Vermont lawmakers pursued parallel efforts to protect children online: H. 121, a comprehensive privacy bill, and S. 289, an AADC. A slimmed-down version of S. 289 was appended to H. 121, so both ultimately passed together. The Vermont Data Privacy Act provisions address minors’ data protection, while the AADC addresses the safety and design features of online services for minors. A key example of this delineation is that while the VDPA restricts dark patterns specifically related to exercising data rights, the Vermont AADC bans all dark patterns. The AADC defines dark patterns broadly as any user interface that undermines user autonomy. Because this restriction is not tied to data rights or any specifically identified harm, the prohibition can be interpreted quite broadly. Additionally, the AADC prohibits “low-friction variable rewards” that “encourage excessive and compulsive use by a minor.” A low-friction variable reward is defined as “design features or virtual items that intermittently reward consumers for scrolling, tapping, opening, or continuing to engage” in a service, with examples including endless scroll and autoplay.
Another wrinkle of the attached AADC is that Vermont creates two duties of care for minors. In the comprehensive privacy section, companies have a duty to avoid heightened risk of harm to minors. The AADC separately imposes an affirmative minimum duty of care owed to minors by any business that processes a minor’s data in any capacity.
Lastly, the AADC specifies that age verification is not required to comply with the obligations of this section, perhaps a proactive effort to avoid litigation over the constitutionality of age verification mandates. Instead, the AADC clarifies that the obligations should be met using age estimation techniques. Given how age estimation is defined, this would present a novel question for a court to consider should there be any litigation. It is also worth noting that age estimation often involves additional data collection, so covered organizations will need to take care in reconciling these obligations with the data minimization provisions of the VDPA.
Next steps
H. 121 has not yet been presented to the Governor for consideration. Once it is, the Governor will have only five days to consider the bill. Given the novelty of several of its provisions, the bill may be cause for concern or an opportunity for Vermont to raise the bar across the nation. Should the Governor veto it, the bill passed both chambers with the votes necessary to support an override. Organizations in scope and Vermonters should take note that the bill also calls for the Attorney General to lead a public education, outreach, and assistance program, which would begin to take effect July 1, 2024.
Little Users, Big Protections: Colorado and Virginia pass laws focused on kids’ privacy
‘Don’t call me kid, don’t call me baby’ – unless you are a child residing in Colorado or Virginia, where children will soon have increased privacy protections thanks to recent advances in youth privacy legislation. Virginia and Colorado both have broad-based privacy laws already in effect, and during the 2024 state legislative sessions both states amended those laws to add specific online privacy protections for kids’ data. In Virginia, HB 707/SB 361 passed the state legislature, moved to Governor Youngkin’s desk on March 8, and, after some procedural hurdles, became law on May 17, taking a modest approach to additional youth-tailored protections. In Colorado, SB 41 passed the legislature on May 14 with near-unanimous votes in both chambers, introducing a more expansive youth privacy framework than Virginia’s. SB 41 is expected to be signed into law by Governor Polis as passed by the Colorado legislature. Following Connecticut’s lead last year, these developments signal a growing trend of states building off of existing privacy frameworks to strengthen protections for children’s data online.
Colorado
Although Colorado SB 41 is more expansive than what Virginia passed, the requirements in this law are familiar. SB 41 is almost an exact copy of the youth privacy amendment to Connecticut’s comprehensive privacy bill, SB 3, which we covered in a blog post in 2023. As a result, there is a general compliance model for the requirements of this bill. It is still worth noting, however, that there are some differences between Colorado SB 41 and Connecticut SB 3 that should be given special attention, especially where the impact of these differences remains to be seen.
What’s familiar about SB 41?
The scope of SB 41 is nearly identical to SB 3.
As an amendment to a comprehensive state privacy law, SB 41 will work within the existing Colorado Privacy Act (“CPA”) to provide additional heightened protections for kids and teens up to 18. The compliance requirements of SB 41 rely on the existing definition of controller in the CPA. The obligations under both Colorado and Connecticut apply to controllers who offer any online service, product, or feature to consumers whom the controller has actual knowledge, or willfully disregards, are minors. Most importantly, the text of the bill makes clear that, while some child-focused provisions of Colorado and Connecticut’s laws only apply to controllers that meet specified revenue or user thresholds, the duty of care provisions apply to all controllers.
Both states create a duty of care owed to minors.
SB 41 creates a duty to use reasonable care to avoid any heightened risk of harm to minors and creates additional risk assessment requirements for minors’ data. This duty applies where the controller has actual knowledge, or willfully disregards, that a user is under 18 years of age. If controllers comply with the bill’s risk assessment requirements, there is a rebuttable presumption in any enforcement action brought by the State Attorney General that the controller used the reasonable care required to avoid heightened risk of harm to minors. Controllers therefore have a strong incentive to conduct risk assessments, since doing so could shield them from enforcement in cases of unforeseeable harm to minors resulting from their online service, product, or feature.
Both states have requirements that draw on the California AADC, with differences.
The substantive requirements under Colorado are nearly identical to those in Connecticut. Both SB 41 and SB 3 have restrictions on processing minors’ data similar to those originally seen in the enjoined California Age-Appropriate Design Code. For example, SB 41 limits controllers’ ability to profile, process geolocation data, or display targeted ads to a minor’s account without prior consent. However, unlike the California AADC, neither Colorado nor Connecticut requires a controller to estimate the age of users or assess harms related to content.
What’s different about SB 41?
An additional harm must be considered in Colorado.
SB 41 goes a step further than Connecticut SB 3 in the categories that must be included in data protection impact assessments (“DPIAs”), introducing a fourth type of harm that must be considered: the heightened risk of harm from any “unauthorized disclosure of the personal data of minors as a result of a security breach.” It is unclear at this time how significant the impact will be on controllers’ compliance efforts, but it does indicate a strong interest in the security of minors’ data collected through online services, products, and features. Alongside this fourth kind of harm, SB 41 includes the same three harms found in SB 3’s “heightened risk of harm to minors” definition: (1) unfair or deceptive treatment of, or unlawful disparate impact on, minors; (2) any financial, physical, or reputational injury to minors; and (3) any physical or other intrusion on the seclusion, solitude, or privacy of minors that would be offensive to a reasonable person. Beyond the general duty of care to avoid these types of harm to minors, under both Connecticut and Colorado, controllers must assess for these harms in DPIAs.
No ‘unpublishing’ requirement.
SB 3 had a standalone section focused specifically on obligations for social media platforms. SB 41 lacks SB 3’s requirement that a controller ‘unpublish’ a minor’s social media account. All requirements in SB 41 apply generally to covered services.
Virginia
Compared to Colorado and Connecticut’s youth privacy amendments, Virginia passed a more modest set of requirements for controllers in the state. Despite this moderate approach, Virginia’s method of heightening child privacy protections online is still worth watching. The Governor’s proposed amendments, which the legislature ultimately rejected, would have been much more expansive, such as raising the age threshold for parental consent up to 17. As the bill sponsors indicated during floor hearings, the smaller step that passed is only a starting point for the state. Virginia lawmakers signaled an intent to continue building upon this foundation of privacy protections and to raise the age threshold in the law, but first want to get something attainable “on the books… versus [being] stuck in court” with constitutional challenges.
Scope
Like Colorado SB 41, Virginia HB 707 would work within the state’s existing comprehensive privacy law, taking on the established controller definition. Unlike in Colorado, small businesses are exempt from the Virginia Consumer Data Protection Act (VCDPA). HB 707 does not create a separate scope or applicability threshold for its child privacy provisions; they apply on the same terms as the other privacy requirements in the VCDPA. The protections afforded under HB 707 apply to known children under 13.
Controller obligations
Unlike Colorado SB 41 and Connecticut SB 3, Virginia HB 707 does not create a duty of reasonable care. Instead, HB 707 simply limits the processing of children’s data, establishes requirements for obtaining consent to process that data, and expands DPIA requirements. The limits on processing and obtaining consent generally align with what COPPA requires, though COPPA technically applies only to collection rather than processing. While HB 707 creates marginally more specific DPIA requirements, the VCDPA already required conducting DPIAs for sensitive data, which includes the data of known children under 13. Additionally, like Colorado and Connecticut, Virginia HB 707 places default limits on collecting a child’s precise geolocation and requires a signal to the child while this geolocation information is being collected.
Conclusion
Despite some variation in the approaches to enacting youth-focused amendments to comprehensive privacy laws, a trend that began with Connecticut’s SB 3 in 2023 is developing among state legislators: building upon pre-established privacy frameworks. It is worth acknowledging that under state privacy laws, children and teens are already part of the definition of “consumers” these laws are scoped to protect; any broad-based state privacy law will naturally apply to residents of that state, both young and old. However, it may be conceptually easier for lawmakers to envision what additional protections children and teens need once a baseline privacy framework is in place.
Although this is a new and noteworthy privacy development to watch moving forward, it is not the only approach lawmakers are taking to regulate youth online experiences. Another avenue during the 2024 session was the new Age-Appropriate Design Code framework (“AADC 2.0”). While the AADC 2.0 passed in Maryland and Vermont this year, there are several differences between the two states’ versions, as well as some uncertainty about how the AADC 2.0 will hold up to constitutional scrutiny. Compare this with Connecticut and Colorado, which have nearly identical frameworks for youth protections. Over the last few years, several laws intended to address child privacy and safety online have passed in different states, but many, such as the California Age-Appropriate Design Code, have had their implementation delayed by courts over constitutional challenges. Given that SB 3 will not come into force until October 2024, it may be too soon to call Connecticut and Colorado’s amendments a pattern. Still, lawmakers may converge around this approach to protecting children online if it faces a lower risk of legal hurdles than alternative approaches.
Youth Privacy in Immersive Technologies: Regulatory Guidance, Lessons Learned, and Remaining Uncertainties
As young people adopt immersive technologies like extended reality (XR) and virtual world applications, companies are expanding their presence in digital spaces, launching brand experiences, advertisements, and digital products. While virtual worlds may in some ways resemble traditional social media and gaming experiences, they may also collect more data and raise potential manipulation risks, particularly for vulnerable and impressionable young people.
This policy brief analyzes recent regulatory and self-regulatory actions and guidance related to youth privacy, safety, and advertising in immersive spaces, pulling out key lessons for organizations building experiences in virtual worlds.
Recent FTC Enforcement Actions and Guidance
The Federal Trade Commission (FTC) has shown a strong interest in using its consumer protection authority to bring enforcement actions against a wide range of digital companies for alleged “unfair and deceptive” practices, rule violations, and other unlawful conduct. The Commission has also issued several policy statements and guidance documents relevant to organizations building immersive technologies, touching on issues such as biometric data and advertising to children. It is clear the agency is thinking seriously about how its authority could apply in emerging sectors like AI, and organizations working on immersive technologies should take heed. Lessons from recent FTC privacy cases and guidance include:
The FTC interprets the Children’s Online Privacy Protection Act (COPPA)’s definition of “personal information” broadly, including data types that immersive technologies commonly collect, like eye tracking.
Immersive application providers must comply with COPPA if their application is “directed to children” or if there is “actual knowledge” children are accessing it.
Organizations should provide privacy policies and notices in a format appropriate for and consistent with the design elements of immersive experiences.
Organizations should take additional steps to be transparent about advertising practices.
Self-Regulatory Cases and Safe Harbor Guidance
Self-regulatory bodies also have an essential role in ensuring privacy and safety in child-directed applications and providing guidance to companies operating in the space. For example, organizations designated as COPPA Safe Harbors can guide companies toward compliant, developmentally appropriate, and privacy-protecting practices. Lessons from recent self-regulatory cases and Safe Harbor guidance include:
Advertising disclosures in immersive environments should be designed to be as clear and conspicuous as possible and provided in an age-appropriate manner.
Platforms that allow advertisements to children should ensure that developers, brands, and content creators have the necessary tools and guidance to clearly and conspicuously disclose the presence of advertising to children.
Privacy by design and by default demonstrate to regulatory and self-regulatory bodies that an organization takes youth privacy seriously.
Privacy and advertising practices for teens should take into account the unique considerations relevant to teen privacy and safety, compared to child and adult guidance.
Organizations with a robust privacy culture that demonstrate good faith efforts to follow the law are more likely to be given the benefit of the doubt.
Remaining Areas of Uncertainty
Because immersive technologies are relatively new and evolve rapidly, much of the existing regulatory and self-regulatory guidance is pulled from other contexts. Therefore, questions remain about how regulations apply in immersive environments and how to operationalize best practices. These questions include:
How age-appropriate design principles will best fit into an immersive technology context, such as how best to ensure strong default privacy settings for underage users; the best methods for providing clear and transparent data practice notices and advertising disclosures; and whether an immersive experience should require unique, additional safeguards.
Which novel data collection and analysis methods in the immersive technology space will require more discerning data practices around safeguarding and use, such as what kinds of inferences are appropriate to draw from body-based data, or to what extent avatars not derived from a child’s data are considered personal information.
How immersive technology impacts children and teens; more research is needed to understand whether certain kinds of experiences and privacy practices are harmful for children and teens, whether there are unique risks to children’s privacy and mental health, and how organizations, parents, schools, and other stakeholders can address potential issues.
FPF Files COPPA Comments with the Federal Trade Commission
Today, the Future of Privacy Forum (FPF) filed comments with the Federal Trade Commission (Commission) in response to its request for comment on the Children’s Online Privacy Protection Act (COPPA) proposed rule.
As technology evolves, so must the regulations designed to protect children online, and FPF commends the Commission’s efforts to strengthen COPPA. In our comments, we outlined a number of recommendations and considerations that seek to further refine and update the proposed rule, from how it would interact with multiple provisions of a key student privacy law to the potential implications of a proposed secondary verifiable parental consent requirement.
To amplify the questions about how COPPA would interact with the Family Educational Rights and Privacy Act (FERPA), FPF was also one of 12 signatories to a multistakeholder letter addressed to the Commission and Department of Education urging the development of joint guidance.
Children today are increasingly reliant on online services to connect with peers, seek out entertainment, or engage in educational activities, and while there is a great benefit to this, there are also risks to privacy and personal data protection, and we applaud the Commission for its ongoing efforts to find a balance between these tradeoffs. Our comments and recommendations focused on areas where we believe there is further opportunity to strike that balance, including:
Revising definitions in line with how technology has evolved since the last COPPA Rule update, including adding “mobile telephone number” to the definition of online contact information, and clarifying what role text messages can play in the consent process.
Providing more specificity about which types of processes that encourage or prompt the use of a website or online service are of greatest concern to the FTC, as language in the proposed rule may inadvertently limit positive use cases of prompts and notifications, such as homework reminders, meditation apps, and notifications about language lessons.
Aligning the proposed security program language with the stated goal in the Notice of Proposed Rulemaking (NPRM) that operators need “a written comprehensive security program” (emphasis added) rather than a “child-specific” program, which would place an additional burden on companies with no additional benefit to parents or children.
Unique Considerations for Schools and Educational Technology
FPF commends the Commission’s effort to provide better clarity regarding how the rule should be applied in a school context; however, there are several areas where the proposed rule does not fully align with the Family Educational Rights and Privacy Act (FERPA), the primary federal law that governs use and disclosure of educational information. Both laws are complex, and the potential impact of confusion and misalignment is significant for the more than 13,000 school districts across the country and for the edtech vendor community.
With that in mind, our comments related to the proposed rule’s implications for student privacy focused in large part on identifying areas where more alignment and clarity around the interaction between COPPA and FERPA would be particularly instructive for both schools and edtech companies. Our recommendations include:
Working with the US Department of Education to create and maintain joint guidance detailing how operators and schools should interpret their obligations in light of the interaction between COPPA and FERPA. We also recommend that this guidance take into account the perspective and expertise of operator and school stakeholders.
Aligning the school-authorized education purpose exception to prior parental consent with the requirements of FERPA. We highlight several key areas where the rule needs clearer alignment, including how the definition of “school-authorized education purpose” aligns with FERPA’s school official exception, how the use of the term “written agreement” in the proposed rule differs from how the term is used in FERPA, and how both laws address redisclosures of student data.
To download the joint letter to the FTC and U.S. Department of Education signed by FPF and 11 others, click here.
FPF and The Dialogue Release Collaboration on a Catalog of Measures for “Verifiably safe” Processing of Children’s Personal Data under India’s DPDPA 2023
When India’s DPDPA passed in August, it created heightened protections for the processing of the personal data of children up to 18. When the law goes into effect, entities that determine the purpose and means of processing data, known as “data fiduciaries,” will need to apply these heightened protections to children’s data. The DPDPA does not further distinguish between age groups of children, so all protections, such as obtaining parental consent before processing a child’s data, will apply to all children up to 18. However, the DPDPA stipulates that if the processing of children’s personal data is done “in a manner that is verifiably safe,” the Indian government has the authority to lower the age above which data fiduciaries may be exempt from certain obligations.
In partnership with The Dialogue, an emerging research and public-policy think-tank based in New Delhi with a vision to drive a progressive narrative in India’s policy discourse, FPF prepared a Brief compiling a catalog of measures that may be deemed “verifiably safe” when processing children’s personal data. The Brief was informed by best practices and accepted approaches from key jurisdictions with experience in implementing data protection legal obligations geared towards children. Not all of these measures may immediately apply to all industry stakeholders.
While the concept of “verifiably safe” processing of children’s personal data is unique to the DPDPA and not found in other data protection regimes, the Brief’s catalog of measures can aid practitioners and policymakers across the globe.
The Brief outlines the following measures that can amount to “verifiably safe” processing of personal data of children, proposing additional context and actionable criteria for each item:
1. Ensure enhanced transparency and digital literacy for children.
2. Ensure enhanced transparency and digital literacy for parents and lawful guardians of very young users.
3. Opt for informative push notifications and provide tools for children concerning privacy settings and reporting mechanisms.
4. Provide parents or lawful guardians with tools to view, and in some cases set, children’s privacy settings and exercise privacy rights.
5. Set account settings as “privacy friendly” by default.
6. Limit advertising to children.
7. Maintain the functionality of a service at all times, considering the best interests of children.
8. Adopt policies to limit the collection and sharing of children’s data.
9. Consider, via thorough assessments, all risks that processing personal data poses to children and to their best interests.
10. Ensure the accuracy of the personal data of children held.
11. Use and retain personal data of children considering their best interests.
12. Adopt policies regarding how children’s data may be safely shared.
13. Give children options in an objective and neutral way, avoiding deceptive language or design.
14. Put in place robust internal policies and procedures for processing personal data of children and prioritize staff training.
15. Enhance accountability for data breaches by notifying parents or lawful guardians and adopting internal policies, such as a Voluntary Undertaking, if a data breach occurs.
16. Conduct specific due diligence with regard to children’s personal data when engaging processors.
We encourage further conversation between government, industry, privacy experts, and representatives of children, parents, and lawful guardians to identify which practices and measures may suit specific types of services and industries, or specific categories of data fiduciaries.
New FPF Infographic Analyzes Age Assurance Technology & Privacy Tradeoffs
As a growing number of federal and state children’s online privacy and safety proposals seek to age-restrict social media and other online experiences, FPF released a new infographic, Unpacking Age Assurance: Technologies and Tradeoffs. The infographic analyzes the risks and potential harms associated with attempting to discern someone’s age online, as well as potential mitigation tools. FPF also outlines the privacy and accuracy tradeoffs of specific age assurance methods and technologies.
“Age assurance is highly contextual, and the most privacy-protective approach requires choosing a method, or sometimes multiple methods, in a layered approach, that is proportional to the risks of each specific use case,” said Jim Siegl, FPF Youth & Education Senior Technologist, and a co-author of the infographic. “If the potential for user harm associated with the service is high, a higher level of certainty may be appropriate. We hope this analysis and visualization of the key decision points will serve as a helpful guide as policymakers, regulators, service providers, and others continue to evaluate options and potential solutions.”
The analysis outlines the three categories of age assurance, finding that:
Age declaration, including age gate and parental consent/vouching, generally offers the lowest degree of privacy risks to the user and the lowest level of accuracy to the service provider. Parental consent provides more assurance than age-gating but may impact a teen’s autonomy.
Age estimation, such as facial characterization and other algorithmic estimation methods based on browsing history, voice, gait, or data points/signals from a VR game, can be particularly effective for determining whether a user meets an age threshold (e.g., under 13 or 21+) but is less accurate within narrow age ranges (e.g., determining whether a user is 17 or 18).
Age verification, such as government ID plus biometrics or digital ID, is more appropriate for higher-risk, regulated, or age-restricted services, and it provides the greatest level of assurance but also poses the highest degree of privacy risks.
Balancing the privacy and equity implications of age assurance with the potential for harm to minors is an ongoing challenge for policymakers and online service providers. The infographic highlights some of those risks, including limiting legitimate access to online content, a loss of anonymity, limiting teen autonomy, and sensitive data collection and/or unexpected data uses. FPF also identifies some risk management tools, including on-device processing, data minimization, immediate deletion of ID data, and using a third party to separate data processing so that one company doesn’t control/access all of the data.
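As a simplified sketch of the proportionality principle described above, the mapping below pairs illustrative risk tiers with candidate methods; the tiers, names, and mapping are assumptions made for illustration, not a prescribed standard.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"            # e.g., general-audience content with minimal harm potential
    MODERATE = "moderate"  # e.g., social features where age thresholds matter
    HIGH = "high"          # e.g., legally age-restricted or regulated services

def choose_age_assurance(risk: RiskTier) -> list[str]:
    """Illustrative mapping only: real choices depend on accuracy needs, legal
    requirements, and available mitigations (on-device processing, immediate
    deletion of ID data, third-party separation of data processing, etc.)."""
    if risk is RiskTier.LOW:
        return ["age declaration (age gate)"]
    if risk is RiskTier.MODERATE:
        # Layered approach: start with declaration, escalate to estimation if needed
        return ["age declaration", "age estimation (e.g., facial characterization)"]
    return ["age verification (government ID or digital ID)",
            "data minimization and prompt deletion of verification data"]

print(choose_age_assurance(RiskTier.MODERATE))
```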
“Age assurance impacts all users on a service, not just minors. In addition to considering the tradeoffs around privacy and the potential for harm with each particular method, it’s important to balance those against the risk of barring access to content – especially if content restrictions have inequitable impacts,” said Bailey Sanchez, FPF Youth & Education Privacy Senior Counsel and a co-author of the infographic. “While age restrictions such as gambling are a matter of law, risk of harm is subjective – and that’s where things are starting to become really difficult. Different people make different calculations about potential risks posed by online gaming, for example.”
Unpacking Age Assurance builds on a series of new federal and state policy resources from FPF’s Youth & Education Privacy team. FPF recently released a report on verifiable parental consent, a form of age declaration and requirement of the Children’s Online Privacy Protection Act, and its analyses of new children’s privacy laws in Utah, California, Florida, and Connecticut highlight their respective approaches to age assurance.
Connecticut SB 3: New Protections for Consumer Health Data and Minors Online
On June 3rd, Connecticut Senate Bill 3 (SB 3), an “Act Concerning Online Privacy, Data and Safety Protections,” cleared the state legislature following unanimous votes in the House and Senate. If enacted by Governor Lamont, SB 3 will amend the Connecticut Data Privacy Act (CTDPA) to create new rights and protections for consumer health data and minors under the age of 18, and also make small-but-impactful amendments to existing provisions of the CTDPA. The bill also contains some standalone sections, such as a section requiring the operators of online dating services within the state to implement new safety features, including a mechanism to report “harmful or unwanted” behavior.
The children’s and health provisions of SB 3 appear to be informed by the California Age-Appropriate Design Code (AADC) and the recently enacted Washington State My Health, My Data Act, respectively, but contain numerous important distinctions. FPF has prepared a comparison chart to help stakeholders assess how SB 3’s youth privacy provisions compare to the California AADC. The provisions related to consumer health data will take effect on October 1, 2023, while the new requirements governing minors’ data and accounts will take effect a year later, on October 1, 2024.
New protections for youth online (Sections 7-13)
Sections 8-13 of SB 3 create new protections for youth online by expanding youth-specific protections to include teens up to 18, placing limits on certain data processing activities, and requiring services to assess risk to minors through data protection assessments. SB 3 appears to draw inspiration from the California Age-Appropriate Design Code Act’s (AADC) obligations and prohibitions but includes many divergences, which are assessed in further detail in a comparison chart. If enacted, these provisions will go into effect on October 1, 2024, with a right to cure until December 31, 2025. Additionally, Section 7 of the bill specifically regulates social media platforms and is largely focused on facilitating requests from a minor or minor’s parent to “unpublish” a minor’s social media account within 15 business days.
1. Scope
The obligations in Sections 8-13 will apply to controllers offering any online service, product, or feature to consumers whom the controller has actual knowledge, or wilfully disregards, are minors. “Minors” is defined as any consumer under 18, in line with recently passed legislation in California and Florida. SB 3 borrows the California AADC’s “online service, product, or feature” scope but retains the CTDPA’s “actual knowledge, or wilfully disregards” knowledge standard rather than the California AADC’s “likely to be accessed” standard. As written, it appears that the data protection and design obligations under the proposal would apply on an individualized basis to the minors the bill aims to protect, rather than governing the entire service. Additionally, there are no affirmative age estimation requirements within the proposal, meaning that the scope of SB 3 is narrower than the California AADC’s because it only applies to controllers who have actual knowledge, or willfully disregard, that minors are using their service. These divergences may be a response to First Amendment objections raised in the NetChoice v. Bonta litigation seeking to strike down the California AADC.
2. Key obligations
SB 3 requires controllers to use reasonable care to avoid “any heightened risk of harm to minors” caused by their service. “Heightened risk of harm to minors” is defined to mean “processing minors’ personal data in a manner that presents any reasonably foreseeable risk of (A) any unfair or deceptive treatment of, or any unlawful disparate impact on minors, (B) any financial, physical or reputational injury to minors, or (C) any physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of minors if such intrusion would be offensive to a reasonable person.” This requirement is reminiscent of the California AADC’s “material detriment” language, though “material detriment” and “harm” are undefined within the California AADC, and thus SB 3 may provide more clarity to controllers in scope.
Building off the data protection assessment requirements set forth in the CTDPA, SB 3 requires controllers to address (1) the purpose of the service, (2) the categories of minors’ personal data processed by the service, (3) the purpose of the data processing, and (4) any heightened risk of harm to minors that is a reasonably foreseeable result of offering the service. The bill specifically notes that a single data protection assessment may address a comparable set of processing operations that include similar activities. If controllers comply with the data protection assessment requirements of the bill, there is a rebuttable presumption in any enforcement action brought by the State AG that a controller used the reasonable care required to avoid heightened risk of harm to minors.
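One way a controller might organize the four required elements is as a simple structured record; the field names below are illustrative shorthand, not statutory terms.

```python
from dataclasses import dataclass, field

@dataclass
class MinorsDataProtectionAssessment:
    """Illustrative structure mirroring the four elements SB 3 requires controllers
    to address in a data protection assessment; field names are shorthand only."""
    service_purpose: str                 # (1) purpose of the online service, product, or feature
    minors_data_categories: list[str]    # (2) categories of minors' personal data processed
    processing_purposes: list[str]       # (3) purposes of the data processing
    heightened_risks_of_harm: list[str]  # (4) reasonably foreseeable heightened risks to minors
    mitigations: list[str] = field(default_factory=list)  # not required by the text; added for practicality

assessment = MinorsDataProtectionAssessment(
    service_purpose="Homework help community for students",
    minors_data_categories=["account identifiers", "approximate location"],
    processing_purposes=["content recommendations", "safety moderation"],
    heightened_risks_of_harm=["unwanted adult contact via direct messages"],
)
print(assessment.heightened_risks_of_harm)
```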
SB 3 includes several data processing limits that are subject to the consent of a minor or a minor’s parent. While 2023 has seen other states pass legislation requiring teens to obtain parental consent, thus treating all minors the same for purposes of exercising rights online, SB 3 allows minors 13 and older to consent for themselves. Absent consent, controllers are prohibited from processing data not reasonably necessary to provide a service, retaining data for longer than necessary, and using any system design feature to “significantly increase, sustain or extend” a minor’s use of the service. Although data minimization is a key privacy principle found in most privacy proposals, it is atypical for it to be subject to consent. Targeted advertising and the sale of a minor’s personal data are also subject to the consent of a minor or a minor’s parent, expanding the CTDPA’s existing protections for teens, which create opt-in requirements for the sale of, or processing for targeted advertising of, data from teens aged 13 to 15.
In addition to the above limits subject to the consent of a minor, SB 3 creates new prohibitions for controllers offering services to minors. As in the California AADC, there are limits on collecting precise geolocation information, along with a requirement to provide a signal when that information is being collected. Neither SB 3 nor the California AADC defines “signal,” though the California AADC at least specifies an “obvious signal.” The bill also includes two design-related prohibitions: controllers are prohibited from providing any consent mechanisms designed to impact user autonomy or choice, and from offering direct messaging without providing “readily accessible and easy-to-use safeguards” to limit the ability to receive messages from adults with whom the minor is not connected.
New protections for consumer health data (Sections 1-6)
The CTDPA designates data revealing “health condition and diagnosis” information as a sensitive category of personal data subject to heightened protections, including an affirmative consent requirement for processing. SB 3 aims to expand the CTDPA’s protections for consumer health information by (1) creating a new sensitive data category under the CTDPA of “consumer health data,” (2) creating protections governing the collection and processing of “consumer health data,” applicable to a broad range of entities, and (3) establishing restrictions on the geofencing of healthcare facilities.
1. Definitions
If enacted, SB 3 will add eleven new health-related definitions to the CTDPA, including the terms “abortion,” “consumer health data,” “geofence,” “gender-affirming health data,” and “reproductive or sexual health data.” SB 3 is focused on establishing protections for “consumer health data,” defined as “any personal data that a controller uses to identify a consumer’s physical or mental health condition or diagnosis, and includes, but is not limited to, gender-affirming health data and reproductive or sexual health data” (emphasis added). This is a narrower definition of “consumer health data” than established under the Washington ‘My Health, My Data’ Act (MHMD), which applies to personal information that “identifies” a consumer’s health status, even if not used for a health-related purpose.
SB 3’s focus on “data used to identify physical or mental health condition or diagnosis” differs slightly from the CTDPA’s original protections for “data revealing mental or physical health condition or diagnosis” in that it centers on a regulated entity’s use of data, rather than the nature of a data point itself. Data is subject to these new health data protections when an entity uses it to identify something about a consumer’s health, seemingly including through inference, whether or not that data “reveals” something about a consumer’s health on its face. In addition, SB 3’s definition of “consumer health data” explicitly includes “gender-affirming” and “reproductive or sexual” health information. It remains to be seen what the impact of this distinction will be when the amended CTDPA takes effect.
2. Expanded Protections for the Collection and Processing of “Consumer Health Data”
SB 3 would create several protections exclusive to consumer health data that apply to “persons,” a category that includes non-profits and small businesses, which are otherwise excluded from coverage under the CTDPA. First, SB 3 requires that any employee or contractor with access to consumer health data be subject to either a contractual or statutory duty of confidentiality. In addition, the Act will forbid entities that collect and process consumer health data from selling that data without prior consumer consent.
3. Restrictions on Geofencing
SB 3 follows MHMD in responding to concerns about the geofencing-facilitated digital harassment of individuals visiting abortion and gender-affirming care facilities post-Dobbs v. Jackson Women’s Health Organization by forbidding “persons” from geofencing mental, reproductive, or sexual health facilities for certain purposes. These purposes include the geofencing of health facilities conducted in order to (1) identify, (2) track, (3) collect data from, or (4) send health-related notifications to consumers. The act defines “geofence” broadly, as “any technology that uses global positioning coordinates, cell tower connectivity, cellular data, radio frequency identification, wireless fidelity technology data or any other form of location detection, or any combination of such coordinates, connectivity, data, identification or other form of location detection, to establish a virtual boundary.”
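To make the statutory definition concrete, the sketch below implements the simplest form of a “virtual boundary”: a radius check around a facility’s coordinates using the haversine distance. This is purely illustrative of the underlying technique; the statute’s definition covers any location-detection technology, not only GPS-based radius checks.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two (latitude, longitude) points, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def inside_geofence(user_lat: float, user_lon: float,
                    fence_lat: float, fence_lon: float, radius_m: float) -> bool:
    """True if the user's reported location falls within the circular virtual boundary."""
    return haversine_m(user_lat, user_lon, fence_lat, fence_lon) <= radius_m

# Illustrative coordinates only
print(inside_geofence(41.7658, -72.6734, 41.7660, -72.6730, radius_m=100))  # True
```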
Other modifications to CTDPA
In addition to the substantive changes creating new consumer rights for consumer health data and youth data, SB 3 makes minor but meaningful changes to the CTDPA. FPF observes four notable changes:
(1) “Data concerning an individual’s status as a victim of crime” is added to the “sensitive personal data” definition, perhaps inspired by pending legislation in Oregon.
(2) Consistent with other state privacy laws, Tribal nation government organizations and air carriers are carved out of the CTDPA’s scope.
(3) The knowledge standard for processing youth data was modified from “actual knowledge and wilfully disregards” to “actual knowledge or wilfully disregards.” This amendment fixes a likely drafting error and aligns the CTDPA’s knowledge standard with the CCPA and Montana, strengthening privacy protections for children.
(4) Finally, SB 3 clarifies the Connecticut Attorney General may consider the “sensitivity of the data” involved in a violation of the CTDPA, along with other factors, when determining whether to grant a controller or consumer health data controller a right to cure.
Conclusion
Connecticut’s unanimous passage of SB 3 reflects the urgency of the new priorities around health and kids’ privacy that have permeated the 2023 legislative session. When these provisions take effect in October, the modified CTDPA will provide a template for other states that may wish to integrate protections for consumer health data within their comprehensive privacy laws, rather than passing standalone laws like MHMD. Similarly, Connecticut provides a template for states seeking to increase protections for youth online by first setting baseline standards for all consumers and then building off of that framework to create heightened protections for those under 18.
Shining a Light on the Florida Digital Bill of Rights
On May 4, 2023, the Florida ‘Digital Bill of Rights’ (SB 262) cleared the state legislature and now heads to the Governor’s desk for signature. SB 262 bears many similarities to the Washington Privacy Act and its progeny (specifically the Texas Data Privacy and Security Act). However, SB 262 is unique given the narrow scope of businesses it regulates, other significant deviations from current trends in U.S. state privacy legislation, and its inclusion of a section in the style of Age-Appropriate Design Code (AADC) regulations that applies more broadly than the “comprehensive” parts of the bill. This blog highlights five unique and key features of the Florida Digital Bill of Rights:
1) SB 262 includes a section on “Protection of Children in Online Spaces”, which draws inspiration from the California AADC but diverges in many key aspects.
2) The scope of the comprehensive privacy provisions of SB 262 covers only businesses making $1 billion in revenue and meeting other threshold requirements.
3) SB 262 creates both familiar and novel consumer rights surrounding sensitive data and targeted advertising, raising compliance questions.
4) Under SB 262, controllers and processors will have new responsibilities including creating retention schedules and disclosure obligations for the sale of sensitive or biometric data.
5) Businesses regulated under SB 262 that utilize voice or face recognition, or have video or audio features in devices, will be subject to heightened restrictions for data collected through these services, regardless of whether the device can identify an individual.
Additionally, FPF is releasing a chart to help stakeholders assess how SB 262’s “Protections for Children in Online Spaces” compares to the California Age-Appropriate Design Code Act (California AADC).
1. The “Protection of Children in Online Spaces” Section Draws Inspiration from the California AADC but Diverges in Many Key Aspects
Many amendments were added to SB 262 at the eleventh hour, including several provisions on the ‘Protection of Children in Online Spaces’ (“Section 2”). FPF’s comparison chart assesses each requirement of Section 2 against the California AADC. Section 2 will govern a far broader set of covered entities than the bulk of SB 262’s provisions on privacy, and while it clearly incorporates language and concepts from the California AADC, it contains significant deviations in both scope and substance.
Scope of covered entities
The scope of entities subject to Section 2 is both broader and narrower than that of the California AADC. While the California AADC broadly applies to all online products, services, and features that are “likely to be accessed by children” under age 18, Section 2 only applies to “online platforms,” covering social media and online gaming platforms. The definition of “social media platform” includes “a form of electronic communication through which users create online communities or groups to share information, ideas, personal messages, and other content” and does not list any exemptions. “Online gaming platforms” is undefined. While seemingly narrower in scope than the California AADC, Section 2 contains no minimum revenue or user applicability thresholds, meaning that smaller businesses not subject to California’s law may be within scope. Additionally, the scope of “social media platform” could encompass a number of non-obvious organizations, depending on how broadly the definition is construed.
No explicit DPIA or age estimation requirements
Section 2 does not require a data protection impact assessment (DPIA) as the California AADC does; instead, it places a burden of proof on online platforms to demonstrate that their processing of personal information does not violate any of the law’s prohibitions. Covered platforms may therefore ultimately need to conduct a DPIA or similar assessment to meet this burden of proof.
Like the California AADC, Section 2 defines a child as an individual under 18, though, unlike the AADC, Section 2 does not affirmatively require age estimation. Section 2 also modifies the California AADC’s “likely to be accessed by children” standard by adding a “predominantly” qualifier, but does not lay out any factors for assessing whether a service is likely to be accessed by children.
Prohibitions
Two key points on which Section 2 of SB 262 diverges from the California AADC are in the restrictions on processing and profiling.
Under Section 2, covered services may not process the personal information of a person under 18 if they have actual knowledge, or willfully disregard, that the processing may result in “substantial harm or privacy risk to children.” The absence of affirmative age estimation requirements and the inclusion of an “actual knowledge or willfully disregard” knowledge standard could be a response to First Amendment objections raised in the NetChoice v. Bonta litigation seeking to strike down the California AADC. The “substantial harm or privacy risk” language is reminiscent of the California AADC’s prohibition on processing children’s data in a materially detrimental manner. However, while “material detriment” is undefined in the California AADC, Section 2 defines “substantial harm or privacy risk” to include: mental health disorders; addictive behaviors; physical violence, online bullying, and harassment; sexual exploitation; the promotion and marketing of tobacco, gambling, alcohol, or narcotic drugs; and predatory, unfair, or deceptive marketing practices or other financial harms.
Both the California AADC and Section 2 limit the profiling of people under 18 except in certain circumstances. While both contain an exception for profiling that is necessary to provide an online service, product, or feature, the California AADC contains an additional exemption if the business can demonstrate a “compelling reason that profiling is in the best interests of children.” In contrast, Section 2 contains an exemption if an online platform can demonstrate a compelling reason that profiling does not “pose a substantial harm or privacy risk to children.” The affirmative showing required by the California AADC may be a higher threshold to meet than that of Section 2, especially given that the “best interests of children” standard is undefined and is not an established U.S. legal standard outside of the family law context. Furthermore, profiling is defined more broadly in Section 2 to include “any form of automated processing performed on personal information to evaluate, analyze, or predict personal aspects relating to the economic situation, health, personal preferences, interests, reliability, behavior, location, or movements of a child,” rather than the California AADC’s “any form of automated processing of personal information to evaluate aspects relating to a person.”
2. The Digital Bill of Rights’ ‘Comprehensive’ Privacy Provisions Will Cover Very Few Businesses.
The types of entities subject to the remaining bulk of SB 262’s ‘comprehensive’ privacy provisions outside of Section 2 are much narrower than under comparable U.S. state privacy laws, even the more limited ones. Florida SB 262 will only apply to a handful of companies that meet a threshold annual gross revenue requirement of $1 billion and either (1) make over 50% of revenue from targeted advertising, (2) operate a “consumer smart speaker and voice command component,” or (3) operate an app store with at least 250,000 software applications. This can be compared to recently enacted privacy laws in Iowa and Indiana, which will apply to businesses that either process personal data of at least 100,000 state residents or derive 50% of gross revenue from the sale of personal data of at least 25,000 consumers. Though the terms “targeted advertising” and “consumer smart speaker” in SB 262 could be construed liberally, the revenue requirement means that Floridians will not receive new rights or protections with respect to the vast majority of businesses that collect their personal data in the Sunshine State.
3. The Bill Creates A Complex Stack of both Familiar and Novel Consumer Rights
SB 262 will establish many rights that are now familiar from U.S. state privacy laws, including confirmation of processing, correction of inaccuracies, deletion, obtaining a copy of a person’s personal data in a portable format, and the ability to opt out of “solely” automated profiling in furtherance of decisions that produce legal or similarly significant effects. However, there are a number of new and unique provisions in the consumer rights sections:
Processing Sensitive Data Requires Both Opt-In and Opt-Out Consent: SB 262 provides that consumers have a right to opt out of the collection or processing of sensitive data. Notably, controllers are separately prohibited from processing sensitive data without first obtaining consumer “consent”. This diverges from other common frameworks that either require consent for processing sensitive data, such as the Connecticut Data Privacy Act, or only create an opt-out right, like Iowa.
Sale of Sensitive Data Also Requires Opt-In and Opt-Out Consent: Similar to Texas HB 4, SB 262 will require all businesses (including those that do not meet SB 262’s revenue thresholds) to obtain consent prior to the sale of sensitive data, while also providing consumers the general right to opt out of the sale of any personal data. “Sale” is defined broadly to include transfers of personal data for “monetary or other valuable consideration” and lacks common exceptions, such as transfers as part of a merger or acquisition.
Targeted Advertising: SB 262 establishes a right to opt out of the processing of personal data for the purpose of targeted advertising. However, targeted advertising has a unique definition that includes personal data obtained over time across both affiliated and unaffiliated websites and online applications. While the Digital Bill of Rights will permit consumers to opt out of targeted advertising that relies on demonstrably ‘pseudonymous’ personal information, by governing data collected across a company’s affiliated properties, it may apply more broadly than most other state privacy laws, which are focused on third-party advertisements.
Parents Exercising Rights on Behalf of Teens: SB 262 defines “child” as an individual under 18, diverging from other state consumer privacy laws and the federal Children’s Online Privacy Protection Act (COPPA). Unlike the California and Connecticut privacy laws, which created heightened opt-in rights for teens aged 13 to 15, SB 262 does not include any further distinctions between young children and teens, suggesting that parents or guardians will have sole responsibility to exercise consumer rights on behalf of youth up to age 18.
4. Controllers and Processors Will Have New Responsibilities for Purging Data and Disclosing Certain Practices
Unlike existing comprehensive state privacy laws, SB 262 would require that covered businesses and their processors implement a retention schedule for the deletion of personal data. The text of this provision appears influenced by the Illinois Biometric Information Privacy Act (BIPA). Under SB 262, controllers or processors may only retain personal data until (1) the initial purpose for collection has been satisfied; (2) the contract for which the data was collected or obtained has expired or been terminated; or (3) two years after the consumer’s last interaction with the regulated business (subject to exceptions). However, unlike BIPA, SB 262 would not require that the retention schedule be made publicly available, and it would permit retention necessary to prevent or detect security incidents.
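As a rough illustration of how these triggers interact, the following sketch (in Python, with hypothetical field names) shows one way a compliance team might model the three retention conditions alongside a simplified security-incident carve-out; it is an assumption-laden simplification, not language drawn from the bill.

```python
# Illustrative sketch only: one way a compliance team might encode SB 262's
# three retention triggers. The Record type and its field names are
# hypothetical, and the statute's exceptions are reduced here to a single
# simplified security-incident flag.
from dataclasses import dataclass
from datetime import datetime, timedelta

TWO_YEARS = timedelta(days=2 * 365)


@dataclass
class Record:
    purpose_satisfied: bool        # (1) initial purpose for collection fulfilled
    contract_ended: bool           # (2) underlying contract expired or terminated
    last_interaction: datetime     # used for (3) the two-year inactivity trigger
    needed_for_security: bool = False  # simplified stand-in for statutory exceptions


def must_delete(record: Record, now: datetime) -> bool:
    """Return True when any retention trigger applies and no exception does."""
    if record.needed_for_security:
        return False
    inactive_two_years = (now - record.last_interaction) >= TWO_YEARS
    return record.purpose_satisfied or record.contract_ended or inactive_two_years


# Hypothetical usage: a record untouched since mid-2021 should be purged.
example = Record(False, False, last_interaction=datetime(2021, 6, 1))
print(must_delete(example, now=datetime(2023, 7, 1)))  # True (two-year trigger)
```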
Further, in addition to the typical privacy notices required by state comprehensive laws, SB 262 creates two distinct disclosure requirements. First, again similar to Texas HB 4, if a controller sells sensitive or biometric data, it must provide the following notice: “NOTICE: This website may sell your [sensitive and/or biometric] personal data.” Second, a controller that operates a search engine is required to disclose the main parameters used in ranking results, “including the prioritization or deprioritization of political partisanship or political ideology” in search results.
5. Businesses that Utilize Voice or Face Recognition, or Have Video or Audio Features in Devices, Have Particular but Perplexing Obligations
Finally, one of SB 262’s most unique provisions is a requirement that covered businesses may not provide consumer devices that engage in “surveillance” when not in active use unless “expressly authorized” by the consumer. Though “surveillance” and “active use” are not defined, the prohibition applies to devices that have any of the following features: voice recognition, facial recognition, video recording, audio recording, “or other electronic, visual, thermal, or olfactory feature” that collects data. SB 262 also fails to define “express authorization,” raising questions as to whether it is analogous to “consent” under the bill or whether a higher standard will be required, such as that under the recently enacted Washington State “My Health, My Data” Act.
SB 262 further provides consumers with the right to opt out of the collection of personal data by voice or face recognition systems. Voice recognition is broadly defined as collecting, storing, analyzing, transmitting, and interpreting spoken words or other sounds, seemingly encompassing almost all audio-based consumer-facing systems. Facial recognition and the other features are not defined, though one can infer they would be given a similarly broad reading. As a result, despite SB 262’s requirement that “biometric data” be used for unique identification of an individual in order to be subject to the legislation’s requirements for sensitive data, most general voice and face systems unrelated to identification will still need to provide consumers the ability to opt out under these provisions. These restrictions and requirements may interfere with the functionality of some products that rely on these features, such as accessibility tools that use natural language processing to transcribe spoken words. Moreover, despite SB 262’s revenue threshold, these prohibitions and restrictions will likely flow down, through contractual agreements and requirements, to any other entity that utilizes (or offers a software plug-in to) voice assistant devices like Amazon Echo or Apple Siri for customer service, customer ordering, or other forms of user engagement.
Conclusion
Given that many of the consumer rights and business obligations of SB 262 will directly apply to very few businesses, it is understandable that the Florida Digital Bill of Rights may have flown under the radar thus far. However, SB 262 is worth a close read, particularly the short-but-impactful section on “Protection of Children in Online Spaces” and the provisions creating novel consumer rights. Given Governor DeSantis’ public support for the legislation, we anticipate the Digital Bill of Rights will be enacted shortly and will go into effect on July 1, 2024, giving stakeholders just over a year to understand their compliance obligations. We note, however, that the specific consumer rights and business obligations under SB 262 may evolve, as the State Attorney General’s office is granted both mandatory and permissive rulemaking authority.
Utah Considers Proposals to Require Web Services to Verify Users’ Ages, Obtain Parental Consent to Process Teens’ Data
Update: On March 23, Governor Spencer Cox signed SB 152 and HB 311. While amendments were made to both bills, the concerns raised in FPF’s analysis remain. SB 152 leaves critical provisions, such as methods to verify age or obtain parental consent, to be established in further rulemaking, but questions remain regarding whether these can be done in a privacy-preserving manner. SB 152’s requirement that social media companies provide parents and guardians access to their teenager’s accounts, including messages, has raised security concerns and questions about what sort of parental access mandates are appropriate for teens online.
Utah lawmakers are weighing legislation that would require social media companies to conduct age verification for all users and extend a parental consent requirement to teens.
The Utah legislature has introduced two similar, competing bills that seek to regulate online experiences for Utah users. SB 152 would require social media companies to verify the age of all Utah users and require parental consent for users under 18 to have an account. The bill would also require social media companies to provide a parent or guardian access to the content and interactions of an account held by a Utah resident under the age of 18. On February 13, SB 152 was amended to replace the prescriptive requirements for age verification (e.g. requirements that companies obtain and retain a government-issued ID) with verification methods established through rulemaking, but concerns remain that a rulemaking process could nonetheless require privacy-invasive age verification methods.
Utah HB 311, as originally introduced, would have required social media companies to verify the age of all Utah residents, require parental consent before users under 18 create an account, and would also prohibit social media companies from using “addictive” designs or features. On February 9, the bill was amended to remove the age verification and parental consent provisions; the provisions regarding design features remain, as does a private right of action. The amended bill passed the Utah House and moved to the Senate, where it will be considered alongside Utah SB 152.
FPF shared our analysis of these bills last week, focusing on three areas:
Parental consent under COPPA has longstanding challenges:
FPF published an analysis and accompanying infographic regarding verifiable parental consent (VPC) under COPPA, which was informed by research and insights from parents, COPPA experts, industry leaders, and other stakeholders. The white paper and infographic highlight key friction points that emerge in the VPC process, including:
Efficacy: It can be difficult to distinguish between children and adults online, and it is harder still to establish whether a particular child is related to a particular adult. While the approved methods under VPC may confirm someone is an adult, they do not confirm whether that adult is a parent or guardian of a child.
Privacy and security: Parents often do not feel comfortable sharing sensitive information, such as their credit card or ID information, and having that information linked to their child’s presence online.
Age verification requires additional data collection:
As written, Utah’s proposed legislation would require companies to affirmatively verify the age of all Utah residents. A key pillar of privacy legislation and best practices is the principle of data minimization: not collecting information beyond what is necessary to provide a service. Requiring social media companies or their agents to collect this additional data would increase the risk of identity theft resulting from a data breach. We also note that because some social media companies are based outside of the United States (with some located in jurisdictions that have few effective privacy rules), there is an inherent security risk in the increased collection of sensitive data for age verification purposes.
Additionally, as written, Utah’s proposed legislation specifies that ages must be verified without defining what “verify” means. Companies would benefit from clarity on whether age verification or age estimation is required. An example of age estimation might include capturing a “selfie” of a user to estimate the user’s age range. Verifying someone’s exact age almost always requires more data collection than estimating an age range or age bracket. Some current age estimation technology can accurately distinguish a 12-year-old from someone over 25, meaning that a much smaller number of users would be required to provide sensitive identity documentation. Although methods of verification and forms or methods of identification will be established through further administrative rulemaking, compliance with the proposed legislation as written may still lead companies to require a government-issued ID to access their services.
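To illustrate the data-minimization point, the sketch below shows a hypothetical estimation-first age gate in Python: only users whose estimated age bracket is ambiguous would be asked for identity documents. The function names, thresholds, and the estimation service are assumptions for illustration; nothing here is drawn from SB 152, HB 311, or any rulemaking.

```python
# Illustrative sketch only: an estimation-first age gate that minimizes data
# collection. `estimate_age_range` stands in for a hypothetical age-estimation
# service (e.g., one analyzing a user-submitted selfie); the thresholds are
# examples and are not drawn from the Utah bills or any rulemaking.
from typing import Tuple

ADULT_CUTOFF = 18           # age of majority assumed by this gate
CONFIDENT_ADULT_FLOOR = 25  # estimation can reliably separate ~12 from 25+


def estimate_age_range(selfie: bytes) -> Tuple[int, int]:
    """Placeholder for a real estimator returning a (low, high) age bracket."""
    raise NotImplementedError


def needs_document_verification(selfie: bytes) -> bool:
    """Escalate to ID-based verification only when the estimate is ambiguous."""
    low, high = estimate_age_range(selfie)
    if low >= CONFIDENT_ADULT_FLOOR:
        return False  # confidently an adult: no identity documents collected
    if high < ADULT_CUTOFF:
        return False  # confidently a minor: apply minor protections, no ID needed
    return True       # ambiguous bracket: exact verification (more data) required
```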
Protecting children and teens online should include increasing privacy protections:
FPF knows that children and teens deserve privacy protections and has highlighted Utah’s leadership in this space, notably in the education context. However, a one-size-fits-all approach may not be appropriate given the developmental differences between young children and teens. Just as children under 13 can access services with safeguards under COPPA, teens stand to benefit from online services, for example by socializing with peers, distant family, and communities. Utah’s legislation proposes to restrict access to services rather than enhance privacy protections on those services. Enhanced privacy protections could benefit not only children but adults as well. Because many parents may ultimately choose to provide consent, it is important to consider how privacy protections could be implemented on online services.
These youth-focused proposals follow last year’s passage of the Utah Consumer Privacy Act – a comprehensive privacy law that created some new rights for Utah residents, but provides fewer protections than other state frameworks. Adding privacy protections for young people would not just help Utah align with other states but also would address several of the privacy risks the social media bills would create. Examples of privacy protective provisions include:
Classifying children’s and teens’ data as sensitive data and restricting the sale or use of children’s and teens’ data for targeted advertising by default;
Adding provisions requiring data minimization, restrictions on secondary uses, or a prohibition against processing personal data in violation of state and federal anti-discrimination laws; and
Providing all consumers with a right to opt out of profiling.