Youth Privacy in Immersive Technologies: Regulatory Guidance, Lessons Learned, and Remaining Uncertainties

As young people adopt immersive technologies like extended reality (XR) and virtual world applications, companies are expanding their presence in digital spaces, launching brand experiences, advertisements, and digital products. While virtual worlds may in some ways resemble traditional social media and gaming experiences, they may also collect more data and raise potential manipulation risks, particularly for vulnerable and impressionable young people.

This policy brief analyzes recent regulatory and self-regulatory actions and guidance related to youth privacy, safety, and advertising in immersive spaces, pulling out key lessons for organizations building experiences in virtual worlds.

Recent FTC Enforcement Actions and Guidance

The Federal Trade Commission (FTC) has shown a strong interest in using its consumer protection authority to bring enforcement actions against a wide range of digital companies for alleged “unfair and deceptive” practices, rule violations, and other unlawful conduct. The Commission has also issued several policy statements and guidance documents relevant to organizations building immersive technologies, touching on issues such as biometric data and advertising to children. It is clear the agency is thinking seriously about how its authority could apply in emerging sectors like AI, and organizations working on immersive technologies should take heed. Lessons from recent FTC privacy cases and guidance include:

Self-Regulatory Cases and Safe Harbor Guidance

Self-regulatory bodies also have an essential role in ensuring privacy and safety in child-directed applications and providing guidance to companies operating in the space. For example, organizations designated as COPPA Safe Harbors can guide companies toward compliant, developmentally appropriate, and privacy-protecting practices. Lessons from recent self-regulatory cases and Safe Harbor guidance include:

Remaining Areas of Uncertainty

Because immersive technologies are relatively new and evolve rapidly, much of the existing regulatory and self-regulatory guidance is pulled from other contexts. Therefore, questions remain about how regulations apply in immersive environments and how to operationalize best practices. These questions include:

FPF Files COPPA Comments with the Federal Trade Commission

Today, the Future of Privacy Forum (FPF) filed comments with the Federal Trade Commission (Commission) in response to its request for comment on the proposed changes to the Children’s Online Privacy Protection Act (COPPA) Rule.

Read our comments in full.

As technology evolves, so must the regulations designed to protect children online, and FPF commends the Commission’s efforts to strengthen COPPA. In our comments, we outlined a number of recommendations and considerations that seek to further refine and update the proposed rule, from how it would interact with multiple provisions of a key student privacy law to the potential implications of a proposed secondary verifiable parental consent requirement. 

To amplify the questions about how COPPA would interact with the Family Educational Rights and Privacy Act (FERPA), FPF was also one of 12 signatories to a multistakeholder letter addressed to the Commission and Department of Education urging the development of joint guidance.

Read the letter here.

Considerations Applicable to All Operators

Children today are increasingly reliant on online services to connect with peers, seek out entertainment, or engage in educational activities. While these services offer great benefits, they also pose risks to privacy and personal data protection, and we applaud the Commission for its ongoing efforts to balance these tradeoffs. Our comments and recommendations focused on areas where we believe there is further opportunity to strike that balance, including:

Unique Considerations for Schools and Educational Technology

FPF commends the Commission’s effort to provide better clarity regarding how the rule should be applied in a school context; however, there are several areas where the proposed rule does not fully align with the Family Educational Rights and Privacy Act (FERPA), the primary federal law that governs use and disclosure of educational information. Both laws are complex, and the potential impact of confusion and misalignment is significant for the more than 13,000 school districts across the country and for the edtech vendor community.

With that in mind, our comments related to the proposed rule’s implications for student privacy focused in large part on identifying areas where more alignment and clarity around the interaction between COPPA and FERPA would be particularly instructive for both schools and edtech companies. Our recommendations include:

To read FPF’s COPPA comments in full, click here.

To download the joint letter to the FTC and U.S. Department of Education signed by FPF and 11 others, click here.

FPF and The Dialogue Release a Collaborative Catalog of Measures for “Verifiably Safe” Processing of Children’s Personal Data under India’s DPDPA 2023

Today, the Future of Privacy Forum (FPF) and The Dialogue released a Brief containing a Catalog of Measures for “Verifiably Safe” Processing of Children’s Personal Data Under India’s Digital Personal Data Protection Act (DPDPA) 2023.

When India’s DPDPA passed in August 2023, it created heightened protections for the processing of personal data of children up to age 18. When the law goes into effect, entities that determine the purpose and means of processing data, known as “data fiduciaries,” will need to apply these heightened protections to children’s data. The DPDPA does not further distinguish between age groups of children, so all protections, such as obtaining parental consent before processing a child’s data, will apply to all children up to 18. However, the DPDPA stipulates that if the processing of children’s personal data is done “in a manner that is verifiably safe,” the Indian government has the authority to lower the age above which data fiduciaries may be exempt from certain obligations.

In partnership with The Dialogue, an emerging research and public-policy think-tank based in New Delhi with a vision to drive a progressive narrative in India’s policy discourse, FPF prepared a Brief compiling a catalog of measures that may be deemed “verifiably safe” when processing children’s personal data. The Brief was informed by best practices and accepted approaches from key jurisdictions with experience in implementing data protection legal obligations geared towards children. Not all of these measures may immediately apply to all industry stakeholders. 

While the concept of “verifiably safe” processing of children’s personal data is unique to the DPDPA and not found in other data protection regimes, the Brief’s catalog of measures can aid practitioners and policymakers across the globe.

The Brief outlines the following measures that can amount to “verifiably safe” processing of personal data of children, proposing additional context and actionable criteria for each item: 


1. Ensure enhanced transparency and digital literacy for children.

2. Ensure enhanced transparency and digital literacy for parents and lawful guardians of very young users.

3. Opt for informative push notifications and provide tools for children concerning privacy settings and reporting mechanisms. 

4. Provide parents or lawful guardians with tools to view, and in some cases set, children’s privacy settings and exercise privacy rights. 

5. Set account settings as “privacy friendly” by default.

6. Limit advertising to children.

7. Maintain the functionality of a service at all times, considering the best interests of children.

8. Adopt policies to limit the collection and sharing of children’s data. 

9. Consider, via thorough assessments, all risks that processing personal data poses to children and their best interests.

10. Ensure the accuracy of the personal data of children held. 

11. Use and retain personal data of children considering their best interests.

12. Adopt policies regarding how children’s data may be safely shared.  

13. Give children options in an objective and neutral way, avoiding deceptive language or design.

14. Put in place robust internal policies and procedures for processing personal data of children and prioritize staff training.

15. Enhance accountability for data breaches by notifying parents or lawful guardians and adopting internal policies, such as a Voluntary Undertaking, if a data breach occurs.

16. Conduct specific due diligence with regard to children’s personal data when engaging processors.  

We encourage further conversation between government, industry, privacy experts, and representatives of children, parents, and lawful guardians to identify which practices and measures may suit specific types of services and industries, or specific categories of data fiduciaries. 

New FPF Infographic Analyzes Age Assurance Technology & Privacy Tradeoffs

As a growing number of federal and state children’s online privacy and safety proposals seek to age-restrict social media and other online experiences, FPF released a new infographic, Unpacking Age Assurance: Technologies and Tradeoffs. The infographic analyzes the risks and potential harms associated with attempting to discern someone’s age online, as well as potential mitigation tools. FPF also outlines the privacy and accuracy tradeoffs of specific age assurance methods and technologies. 

Click here to view the infographic. 

“Age assurance is highly contextual, and the most privacy-protective approach requires choosing a method, or sometimes multiple methods, in a layered approach, that is proportional to the risks of each specific use case,” said Jim Siegl, FPF Youth & Education Senior Technologist, and a co-author of the infographic. “If the potential for user harm associated with the service is high, a higher level of certainty may be appropriate. We hope this analysis and visualization of the key decision points will serve as a helpful guide as policymakers, regulators, service providers, and others continue to evaluate options and potential solutions.”

The analysis outlines the three categories of age assurance, finding that:

Balancing the privacy and equity implications of age assurance with the potential for harm to minors is an ongoing challenge for policymakers and online service providers. The infographic highlights some of those risks, including limiting legitimate access to online content, loss of anonymity, limiting teen autonomy, and sensitive data collection and/or unexpected data uses. FPF also identifies some risk management tools, including on-device processing, data minimization, immediate deletion of ID data, and using a third party to separate data processing so that no single company controls or accesses all of the data.
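To make the layered, risk-proportional approach described above concrete, the following Python sketch shows one way a service might combine methods by risk tier. The risk tiers, method names, and the select_age_assurance helper are hypothetical illustrations under assumptions of our own, not recommendations taken from the infographic.

```python
# Hypothetical sketch of layered, risk-proportional age assurance.
# Risk tiers, method names, and thresholds are illustrative only; they are not
# drawn from the FPF infographic or from any statute.

from enum import Enum


class Risk(Enum):
    LOW = 1     # e.g., general-audience content with little potential for harm
    MEDIUM = 2  # e.g., social features or user-generated content
    HIGH = 3    # e.g., legally age-restricted content such as gambling


def select_age_assurance(risk: Risk) -> list:
    """Return an ordered, layered set of age assurance steps proportional to risk."""
    layers = ["self_declared_age"]           # lowest friction, least certainty
    if risk in (Risk.MEDIUM, Risk.HIGH):
        layers.append("age_estimation")      # e.g., facial age estimation, ideally on-device
    if risk is Risk.HIGH:
        layers.append("age_verification")    # e.g., ID check, with immediate deletion of ID data
    return layers


if __name__ == "__main__":
    for tier in Risk:
        print(tier.name, "->", select_age_assurance(tier))
```

In practice, whichever layers are chosen would still need to be paired with the mitigation tools noted above, such as on-device processing and immediate deletion of ID data.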

“Age assurance impacts all users on a service, not just minors. In addition to considering the tradeoffs around privacy and the potential for harm with each particular method, it’s important to balance those against the risk of barring access to content – especially if content restrictions have inequitable impacts,” said Bailey Sanchez, FPF Youth & Education Privacy Senior Counsel and a co-author of the infographic. “While age restrictions such as gambling are a matter of law, risk of harm is subjective – and that’s where things are starting to become really difficult. Different people make different calculations about potential risks posed by online gaming, for example.”

Unpacking Age Assurance builds on a series of new federal and state policy resources from FPF’s Youth & Education Privacy team. FPF recently released a report on verifiable parental consent, a form of age declaration and requirement of the Children’s Online Privacy Protection Act, and its analyses of new children’s privacy laws in Utah, California, Florida, and Connecticut highlight their respective approaches to age assurance. 


To access the Youth & Ed team’s child and student privacy resources, visit www.StudentPrivacyCompass.org and follow the team on Twitter at @SPrivacyCompass.

Connecticut Shows You Can Have It All

On June 3rd, Connecticut Senate Bill 3 (SB 3), an “Act Concerning Online Privacy, Data and Safety Protections,” cleared the state legislature following unanimous votes in the House and Senate. If enacted by Governor Lamont, SB 3 will amend the Connecticut Data Privacy Act (CTDPA) to create new rights and protections for consumer health data and minors under the age of 18, and also make small-but-impactful amendments to existing provisions of the CTDPA. The bill also contains some standalone sections, such as a section requiring the operators of online dating services within the state to implement new safety features, including a mechanism to report “harmful or unwanted” behavior.

The children’s and health provisions of SB 3 appear to be informed by the California Age-Appropriate Design Code (AADC) and the recently enacted Washington State My Health, My Data Act, respectively, but contain numerous important distinctions.  FPF has prepared a comparison chart to help stakeholders assess how SB 3’s youth privacy provisions compare to the California AADC. The provisions related to consumer health data will take effect on October 1, 2023, while the new requirements governing minors’ data and accounts will take effect a year later, on October 1, 2024.

New protections for youth online (Sections 7-13)

Sections 8-13 of SB 3 create new protections for youth online by expanding youth-specific protections to cover teens up to 18, placing limits on certain data processing activities, and requiring services to assess risks to minors through data protection assessments. SB 3 appears to draw inspiration from the California AADC’s obligations and prohibitions but includes many divergences, which are assessed in further detail in a comparison chart. If enacted, these provisions will go into effect on October 1, 2024, with a right to cure until December 31, 2025. Additionally, Section 7 of the bill specifically regulates social media platforms and is largely focused on facilitating requests from a minor or a minor’s parent to “unpublish” a minor’s social media account within 15 business days.

1. Scope

The obligations in Sections 8-13 will apply to controllers offering any online service, product, or feature to consumers whom the controller has actual knowledge, or wilfully disregards, are minors. “Minors” is defined as any consumer under 18, in line with recently passed legislation in California and Florida. SB 3 borrows the California AADC’s “online service, product, or feature” scope but retains the CTDPA’s “actual knowledge, or wilfully disregards” knowledge standard rather than the California AADC’s “likely to be accessed” standard. As written, it appears that the data protection and design obligations under the proposal would apply on an individualized basis to the minors the bill aims to protect, rather than governing the entire service. Additionally, there are no affirmative age estimation requirements within the proposal, meaning that the scope of SB 3 is narrower than the California AADC because it only applies to controllers who have actual knowledge or willfully disregard that minors are using their service. These divergences may be a response to First Amendment objections raised in the NetChoice v. Bonta litigation seeking to strike down the California AADC.

2. Key obligations

SB 3 requires controllers to use reasonable care to avoid “any heightened risk of harm to minors” caused by their service. “Heightened risk of harm to minors” is defined to mean “processing minors’ personal data in a manner that presents any reasonably foreseeable risk of (A) any unfair or deceptive treatment of, or any unlawful disparate impact on minors, (B) any financial, physical or reputational injury to minors, or (C) any physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of minors if such intrusion would be offensive to a reasonable person.” This requirement is reminiscent of the California AADC’s “material detriment” language, though “material detriment” and “harm” are undefined within the California AADC, and thus SB 3 may provide more clarity to controllers in scope.

Building off the data protection assessment requirements set forth in the CTDPA, SB 3 requires controllers to address (1) the purpose of the service, (2) the categories of minors’ personal data processed by the service, (3) the purpose of the data processing, and (4) any heightened risk of harm to minors that is a reasonably foreseeable result of offering the service. The bill specifically notes that a single data protection assessment may address a comparable set of processing operations that include similar activities. If controllers comply with the data protection assessment requirements of the bill, there is a rebuttable presumption in any enforcement action brought by the State AG that a controller used the reasonable care required to avoid heightened risk of harm to minors.

SB 3 includes several data processing limits that are subject to the consent of a minor or a minor’s parent. While 2023 has seen other states pass legislation requiring teens to obtain parental consent, thus treating all minors the same for purposes of exercising rights online, SB 3 allows minors 13 and older to consent for themselves. Absent consent, controllers are prohibited from processing data not reasonably necessary to provide a service, retaining data for longer than necessary, and using any system design feature to “significantly increase, sustain or extend” a minor’s use of the service. Although data minimization is a key privacy principle found in most privacy proposals, it is atypical for it to be subject to consent. Targeted advertising and the sale of a minor’s personal data are also subject to the consent of a minor or a minor’s parent, expanding the CTDPA’s existing opt-in requirements for the sale of, or processing for targeted advertising of, data from teens aged 13 to 15.

In addition to the above limits subject to the consent of a minor, SB 3 creates new prohibitions for controllers offering services to minors. Like the California AADC, SB 3 limits the collection of precise geolocation information and requires a signal to the minor when that information is being collected. Neither SB 3 nor the California AADC further defines “signal,” though the California AADC specifies that the signal must be “obvious.” The bill also includes two design-related prohibitions: controllers are prohibited from providing any consent mechanisms designed to impact user autonomy or choice and from offering direct messaging without providing “readily accessible and easy-to-use safeguards” to limit the ability to receive messages from adults with whom the minor is not connected.

New protections for consumer health data (Sections 1-6)

The CTDPA designates data revealing “health condition and diagnosis” information as a sensitive category of personal data subject to heightened protections, including an affirmative consent requirement for processing. SB 3 aims to expand the CTDPA’s protections for consumer health information by (1) creating a new sensitive data category under the CTDPA of “consumer health data,” (2) creating protections governing the collection and processing of “consumer health data,” applicable to a broad range of entities, and (3) establishing restrictions on the geofencing of healthcare facilities.

1. Definitions

If enacted, SB 3 will add eleven new health-related definitions to the CTDPA, including the terms “abortion,” “consumer health data,” “geofence,” “gender-affirming health data,” and “reproductive or sexual health data.” SB 3 is focused on establishing protections for “consumer health data,” defined as “any personal data that a controller uses to identify a consumer’s physical or mental health condition or diagnosis, and includes, but is not limited to, gender-affirming health data and reproductive or sexual health data” (emphasis added). This is a narrower definition of “consumer health data” than established under the Washington ‘My Health, My Data’ Act (MHMD), which applies to personal information that “identifies” a consumer’s health status, even if not used for a health-related purpose. 

SB 3’s focus on “data used to identify physical or mental health condition or diagnosis” differs slightly from the CTDPA’s original protections for “data revealing mental or physical health condition or diagnosis” in that it centers on a regulated entity’s use of data, rather than the nature of a data point. Data is subject to these new health data protections when an entity uses it to identify something about a consumer’s health, seemingly including through inference, whether or not that data “reveals” something about a consumer’s health on its face. In addition, SB 3’s definition of “consumer health data” explicitly includes “gender-affirming” and “reproductive and sexual” health information. It remains to be seen what the impact of this distinction will be when these provisions take effect.

2. Expanded Protections for the Collection and Processing of “Consumer Health Data”

SB 3 would create several protections exclusive to consumer health data that apply to “persons,” a category that includes non-profits and small businesses, which are otherwise excluded from coverage under the CTDPA. First, SB 3 requires that any employee or contractor with access to consumer health data shall be subject to either a contractual or statutory duty of confidentiality. In addition, the Act will forbid entities that collect and process consumer health data from selling that health data without prior consumer consent.

3. Restrictions on Geofencing

SB 3 follows MHMD in responding to concerns about the geofencing-facilitated digital harassment of individuals visiting abortion and gender-affirming care facilities post-Dobbs v. Jackson Women’s Health Organization by forbidding “persons” from geofencing mental, reproductive, or sexual health facilities for certain purposes. These purposes include the geofencing of health facilities conducted in order to (1) identify, (2) track, (3) collect data from, or (4) send health-related notifications to consumers. The act defines “geofence” broadly, as “any technology that uses global positioning coordinates, cell tower connectivity, cellular data, radio frequency identification, wireless fidelity technology data or any other form of location detection, or any combination of such coordinates, connectivity, data, identification or other form of location detection, to establish a virtual boundary.”
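As a purely technical illustration of the “virtual boundary” concept in this definition, the minimal Python sketch below checks whether a reported device location falls within a circular boundary around a facility. The coordinates, the 500-meter radius, and the function names are hypothetical assumptions; the quoted definition does not prescribe a radius or any particular implementation.

```python
# Minimal, hypothetical illustration of a circular "virtual boundary" around a facility.
# Coordinates and the radius below are examples only, not values from SB 3 or MHMD.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))


def inside_geofence(device, center, radius_m):
    """Return True if a device location falls within the virtual boundary around a facility."""
    return haversine_m(device[0], device[1], center[0], center[1]) <= radius_m


# Example: a hypothetical facility location and a nearby device reading.
facility = (41.7658, -72.6734)
reading = (41.7661, -72.6740)
print(inside_geofence(reading, facility, radius_m=500))  # True
```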

Other modifications to CTDPA

In addition to the substantive changes creating new consumer rights for consumer health data and youth data, SB 3 makes minor but meaningful changes to the CTDPA. FPF observes four notable changes:

(1) “Data concerning an individual’s status as a victim of crime” is added to the “sensitive personal data” definition, perhaps inspired by pending legislation in Oregon.

(2) Consistent with other state privacy laws, Tribal nation government organizations and air carriers are carved out of scope of the CTDPA.

(3) The knowledge standard for processing youth data was modified from “actual knowledge and wilfully disregards” to “actual knowledge or wilfully disregards.” This amendment fixes a likely drafting error and aligns the CTDPA’s knowledge standard with the CCPA and Montana’s law, strengthening privacy protections for children.

(4) Finally, SB 3 clarifies the Connecticut Attorney General may consider the “sensitivity of the data” involved in a violation of the CTDPA, along with other factors, when determining whether to grant a controller or consumer health data controller a right to cure.

Conclusion

Connecticut’s unanimous passage of SB 3 reflects the urgency of the new priorities around health and kids’ privacy that have permeated the 2023 legislative session. When these provisions take effect in October, the modified CTDPA will provide a template for other states that may wish to integrate protections for consumer health data within their comprehensive privacy laws, rather than passing standalone laws like MHMD. Similarly, Connecticut provides a template for states seeking to increase protections for youth online by first setting baseline standards for all consumers and then building off of that framework to create heightened protections for those under 18.

Shining a Light on the Florida Digital Bill of Rights

On May 4, 2023, the Florida ‘Digital Bill of Rights’ (SB 262) cleared the state legislature and now heads to the Governor’s desk for signature. SB 262 bears many similarities to the Washington Privacy Act and its progeny (specifically the Texas Data Privacy and Security Act). However, SB 262 is unique given the narrow scope of businesses it regulates and other significant deviations from current trends in U.S. state privacy legislation, as well as its inclusion of a section in the style of Age-Appropriate Design Code (AADC) regulations that applies more broadly than the “comprehensive” parts of the bill. This blog highlights five unique and key features of the Florida Digital Bill of Rights:

1) SB 262 includes a section on “Protection of Children in Online Spaces”, which draws inspiration from the California AADC but diverges in many key aspects.

2) The scope of the comprehensive privacy provisions of SB 262 covers only businesses making $1 billion in revenue and meeting other threshold requirements. 

3) SB 262 creates both familiar and novel consumer rights surrounding sensitive data and targeted advertising, raising compliance questions. 

4) Under SB 262, controllers and processors will have new responsibilities including creating retention schedules and disclosure obligations for the sale of sensitive or biometric data. 

5) Businesses regulated under SB 262 that utilize voice or face recognition, or have video or audio features in devices, will be subject to heightened restrictions for data collected through these services, regardless of whether the device can identify an individual.

Additionally, FPF is releasing a chart to help stakeholders assess how SB 262’s “Protections for Children in Online Spaces” compares to the California Age-Appropriate Design Code Act (California AADC).

1. The “Protection of Children in Online Spaces” Section Draws Inspiration from the California AADC but Diverges in Many Key Aspects

Many amendments were added to SB 262 at the eleventh hour, including several provisions on the ‘Protection of Children in Online Spaces’ (“Section 2”). FPF’s comparison chart assesses each requirement of Section 2 against the California AADC. Section 2 will govern a far broader set of covered entities than the bulk of SB 262’s provisions on privacy, and while it clearly incorporates language and concepts from the California AADC, it contains significant deviations in both scope and substance.

Scope of covered entities

The scope of entities subject to Section 2 is both broader and narrower than the California AADC. While the California AADC broadly applies to all online products, services, and features that are “likely to be accessed by children” under age 18, Section 2 applies only to “online platforms,” covering social media and online gaming platforms. The definition of “social media platform” includes “a form of electronic communication through which users create online communities or groups to share information, ideas, personal messages, and other content” and does not list any exemptions. The term “online gaming platform” is undefined. While seemingly narrower in scope than the California AADC, Section 2 contains no minimum revenue or user applicability thresholds, meaning that smaller businesses not subject to California’s law may be within scope. Additionally, it is possible that the scope of “social media platform” could encompass a number of non-obvious organizations, depending on how broadly the definition is construed.

No explicit DPIA or age estimation requirements

While Section 2 does not require a data protection impact assessment (DPIA) as required by the California AADC, it instead places a burden of proof on online platforms to demonstrate that processing personal information does not violate any of the law’s prohibitions. Covered platforms may therefore ultimately need to conduct a DPIA or similar assessment to meet this burden of proof.

Like the California AADC, Section 2 defines a child as an individual under 18, though, unlike the AADC, Section 2 does not affirmatively require age estimation. Section 2 also modifies the California AADC’s “likely to be accessed by children” standard to “predominantly likely to be accessed by children,” but does not lay out any factors for assessing whether a service is predominantly likely to be accessed by children.

Prohibitions

Two key points on which Section 2 of SB 262 diverges from the California AADC are in the restrictions on processing and profiling.

Under Section 2, covered services may not process the personal information of a person under 18 if they have actual knowledge or willfully disregard that the processing may result in “substantial harm or privacy risk to children.” The absence of affirmative age estimation requirements and the inclusion of an “actual knowledge or willfully disregard” knowledge standard could be a response to First Amendment objections raised in the NetChoice v. Bonta litigation seeking to strike down the California AADC. The “substantial harm or privacy risk” language is reminiscent of the California AADC’s prohibition on processing children’s data in a materially detrimental manner. However, while “material detriment” is undefined in the California AADC, Section 2 defines “substantial harm or privacy risk” to include: mental health disorders; addictive behaviors; physical violence, online bullying, and harassment; sexual exploitation; the promotion and marketing of tobacco, gambling, alcohol, or narcotic drugs; and predatory, unfair, or deceptive marketing practices or other financial harms.

Both the California AADC and Section 2 contain limits on profiling of people under 18 except in certain circumstances. While both contain an exception for when necessary to provide an online service, product, or feature, the California AADC contains an exemption if the business can demonstrate a “compelling reason that profiling is in the best interests of children.” In contrast, Section 2 contains an exemption if an online platform can demonstrate a compelling reason that profiling does not “pose a substantial harm or privacy risk to children.” It is possible that the affirmative showing required by the California AADC may be a higher threshold to meet than that of Section 2, especially given that the “best interests of children” standard is undefined and is not an established U.S. legal standard outside of the family law context. Furthermore, profiling is defined more broadly in Section 2 to include “any form of automated processing performed on personal information to evaluate, analyze, or predict personal aspects relating to the economic situation, health, personal preferences, interests, reliability, behavior, location, or movements of a child,” rather than “any form of automated processing of personal information to evaluate aspects relating to a person.”

2. The Digital Bill of Rights’ ‘Comprehensive’ Privacy Provisions Will Cover Very Few Businesses.

The scope of entities subject to the remaining bulk of SB 262’s ‘comprehensive’ privacy provisions outside of Section 2 is much narrower than under comparable U.S. state privacy laws, even the more limited ones. Florida SB 262 will only apply to a handful of companies that meet a threshold annual gross revenue requirement of $1 billion and either (1) make over 50% of revenue from targeted advertising, (2) operate a “consumer smart speaker and voice command component,” or (3) operate an app store with at least 250,000 software applications. This can be compared to recently enacted privacy laws in Iowa and Indiana, which will apply to businesses that either process personal data of at least 100,000 state residents or derive 50% of gross revenue from the sale of personal data of at least 25,000 consumers. Though the terms “targeted advertising” and “consumer smart speaker” in SB 262 could be construed liberally, the revenue requirement means that Floridians will not receive new rights or protections with respect to the vast majority of businesses that collect their personal data in the Sunshine State.

3. The Bill Creates A Complex Stack of both Familiar and Novel Consumer Rights 

SB 262 will establish many rights that are now familiar from U.S. state privacy laws, including confirmation of processing, correction of inaccuracies, deletion, obtaining a copy of a person’s personal data in a portable format, and the ability to opt out of “solely” automated profiling in furtherance of decisions that produce legal or similarly significant effects. However, there are a number of new and unique provisions in the consumer rights sections: 

4. Controllers and Processors Will Have New Responsibilities for Purging Data and Disclosing Certain Practices

Unlike existing comprehensive state privacy laws, SB 262 would require that covered businesses and their processors implement a retention schedule for the deletion of personal data. The text of this provision appears influenced by the Illinois Biometric Information Privacy Act (BIPA). Under SB 262, controllers or processors may retain personal data only until (1) the initial purpose for collection has been satisfied; (2) the contract for which the data was collected or obtained has expired or been terminated; or (3) two years after the consumer’s last interaction with the regulated business (subject to exceptions). However, unlike BIPA, SB 262 would not require that the retention schedule be made publicly available and would permit retention necessary to prevent or detect security incidents.
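As a rough illustration of how a controller might operationalize such a retention schedule, the sketch below encodes the three retention limits and the security-incident carve-out described above. The record fields, the must_delete helper, and the 730-day approximation of two years are assumptions made for illustration, not terms from SB 262.

```python
# Hypothetical sketch of a retention-schedule check reflecting the three retention
# limits and the security-incident exception described above. Field names are
# illustrative assumptions, not terms defined in SB 262.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class DataRecord:
    purpose_satisfied: bool            # (1) initial purpose of collection has been satisfied
    contract_ended: bool               # (2) the underlying contract has expired or terminated
    last_interaction: datetime         # used for (3) the two-year inactivity limit
    needed_for_security: bool = False  # example carve-out: preventing/detecting security incidents


def must_delete(record: DataRecord, now: Optional[datetime] = None) -> bool:
    """Return True if the record has hit any retention limit and no exception applies."""
    now = now or datetime.utcnow()
    if record.needed_for_security:
        return False  # SB 262 permits retention needed to prevent or detect security incidents
    inactive_two_years = now - record.last_interaction > timedelta(days=730)  # ~two years
    return record.purpose_satisfied or record.contract_ended or inactive_two_years


# Example: a record tied to a consumer who has been inactive for roughly three years.
stale = DataRecord(False, False, datetime(2021, 5, 1))
print(must_delete(stale, now=datetime(2024, 6, 1)))  # True
```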

Further, in addition to the typical privacy notices required by state comprehensive laws, SB 262 creates two distinct disclosure requirements. First, similar to the Texas Data Privacy and Security Act, if a controller sells sensitive or biometric data, it must provide the applicable notice: “NOTICE: This website may sell your sensitive personal data” and/or “NOTICE: This website may sell your biometric personal data.” Second, a controller that operates a search engine is required to disclose the main parameters in ranking results, “including the prioritization or deprioritization of political partisanship or political ideology” in search results.

5. Businesses that Utilize Voice or Face Recognition, or Have Video or Audio Features in Devices, Have Particular but Perplexing Obligations

Finally, one of SB 262’s most unique provisions is a requirement that covered businesses may not provide consumer devices that engage in “surveillance” when not in active use unless “expressly authorized” by the consumer. Though “surveillance” and “active use” are not defined, the prohibition applies to devices that have any of the following features: voice recognition, facial recognition, video recording, audio recording, “or other electronic, visual, thermal, or olfactory feature” that collects data. SB 262 further fails to define “express authorization,” raising questions as to whether express authorization is analogous to “consent” under the bill, or if a higher standard will be required for express authorization, such as that required in the recently enacted Washington State “My Health, My Data” Act.

SB 262 further provides consumers with the right to opt out of personal data collected through voice or face recognition. Voice recognition is broadly defined as collecting, storing, analyzing, transmitting, and interpreting spoken words or other sounds, seemingly encompassing almost all audio-based consumer-facing systems. Facial recognition and the other features are not defined, though one can infer they would be defined as broadly as voice recognition. As a result, despite SB 262’s requirement that “biometric data” be used for the unique identification of an individual in order to be subject to the legislation’s requirements for sensitive data, most general voice and face systems unrelated to identification will still need to provide consumers the ability to opt out under these provisions. These restrictions and requirements may prove difficult for the functionality of some products that rely on these features, such as accessibility features that use natural language processing to transcribe spoken words. Moreover, despite SB 262’s revenue threshold, these prohibitions and restrictions will likely flow down, through contractual agreements and requirements, to any other entity that utilizes (or has a software plug-in to) voice assistant devices like Amazon Echo or Apple Siri for customer service, customer ordering, or other forms of user engagement.

Conclusion

Given that many of the consumer rights and business obligations of SB 262 will directly apply to very few businesses, it is understandable why the Florida Digital Bill of Rights may have flown under the radar thus far. However, SB 262 is worth a close read, particularly the short-but-impactful section on “Protection of Children in Online Spaces” and the provisions creating novel consumer rights. Given Governor DeSantis’ public support for the legislation, we can anticipate that the Digital Bill of Rights will be enacted shortly and will go into effect on July 1, 2024, giving stakeholders just over a year to understand compliance obligations. We note, however, that the specific consumer rights and business obligations under SB 262 may evolve, as the State Attorney General’s office is granted both mandatory and permissive rulemaking authority.

Utah Considers Proposals to Require Web Services to Verify Users’ Ages, Obtain Parental Consent to Process Teens’ Data

Update: On March 23, Governor Spencer Cox signed SB 152 and HB 311. While amendments were made to both bills, the concerns raised in FPF’s analysis remain. SB 152 leaves critical provisions, such as methods to verify age or obtain parental consent, to be established in further rulemaking, but questions remain regarding whether these can be done in a privacy-preserving manner. SB 152’s requirement that social media companies provide parents and guardians access to their teenager’s accounts, including messages, has raised security concerns and questions about what sort of parental access mandates are appropriate for teens online.

Utah lawmakers are weighing legislation that would require social media companies to conduct age verification for all users and extend a parental consent requirement to teens. 

The Utah legislature has introduced two similar, competing bills that seek to regulate online experiences for Utah users. SB 152 would require social media companies to verify the age of all Utah users and require parental consent for users under 18 to have an account. The bill would also require social media companies to provide a parent or guardian access to the content and interactions of an account held by a Utah resident under the age of 18. On February 13, SB 152 was amended to replace the prescriptive requirements for age verification (e.g. requirements that companies obtain and retain a government-issued ID) with verification methods established through rulemaking, but concerns remain that a rulemaking process could nonetheless require privacy-invasive age verification methods.

Utah HB 311, as originally introduced, would have required social media companies to verify the age of all Utah residents, require parental consent before users under 18 create an account, and would also prohibit social media companies from using “addictive” designs or features. On February 9, the bill was amended to remove the age verification and parental consent provisions; the provisions regarding design features remain, as does a private right of action. The amended bill passed the Utah House and moved to the Senate, where it will be considered alongside Utah SB 152. 

FPF shared our analysis of these bills last week, focusing on three areas:

Parental consent under COPPA has longstanding challenges:

FPF published an analysis and accompanying infographic regarding verifiable parental consent (VPC) under COPPA, which was informed by research and insights from parents, COPPA experts, industry leaders, and other stakeholders. The white paper and infographic highlight key friction points that emerge in the VPC process, including:

Age verification requires additional data collection: 

As written, Utah’s proposed legislation would require companies to affirmatively verify the age of all Utah residents. A key pillar of privacy legislation and best practices is the principle of data minimization: not collecting information beyond what is necessary to provide a service. Requiring social media companies or their agents to collect this data would increase the risk of identity theft resulting from a data breach. We also note that because some social media companies are based outside of the United States (with some located in jurisdictions that have few effective privacy rules), there is an inherent security risk in the increased collection of sensitive data for age verification purposes.

Additionally, Utah’s proposed legislation as written specifies that ages must be verified but does not define what “verify” means. Companies would benefit from clarity on whether age verification or age estimation is required. An example of age estimation might include capturing a “selfie” of a user to estimate the user’s age range. Verifying someone’s exact age almost always requires more data collection than estimating an age range or age bracket. Some current age estimation technology can accurately distinguish a 12-year-old from someone over 25, leaving a much smaller number of users who would be required to provide sensitive identity documentation. Although methods of verification and forms or methods of identification will be established through further administrative rulemaking, compliance with the proposed legislation as written may still lead companies to require government-issued ID for access to their services.
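The tiered approach described above can be illustrated with a short sketch in which age estimation runs first and government-issued ID is requested only from users whose estimated age falls too close to the relevant thresholds. The buffer age, the margin-of-error handling, and the function name are hypothetical assumptions for illustration, not requirements of the Utah bills or any rulemaking.

```python
# Hypothetical sketch of a tiered age assurance flow: estimate first, and escalate to
# ID-based verification only when the estimate cannot clearly resolve the user's age.
# The buffer age and thresholds are illustrative assumptions only.


def assurance_step(estimated_age: float, margin_of_error: float, buffer_age: int = 25) -> str:
    """Decide the next step from an age-estimation result (e.g., facial age estimation)."""
    if estimated_age - margin_of_error >= buffer_age:
        return "treat_as_adult"          # clearly above the buffer; no ID needed
    if estimated_age + margin_of_error < 18:
        return "treat_as_minor"          # clearly a minor; apply the parental-consent flow
    return "request_id_verification"     # uncertain band; only these users provide ID


if __name__ == "__main__":
    for est, err in [(32.0, 4.0), (14.0, 3.0), (19.0, 4.0)]:
        print(f"estimate {est} +/- {err}: {assurance_step(est, err)}")
```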

Protecting children and teens online should include increasing privacy protections: 

FPF knows that children and teens deserve privacy protections and has highlighted Utah’s leadership in this space, notably in the education context. However, a one-size-fits-all approach may not be appropriate given the developmental differences between young children and teens. Just as children under 13 can access services with safeguards under COPPA, teens stand to benefit from online services such as socializing with peers, distant family, and communities. Utah’s legislation proposes to restrict access to services rather than enhancing privacy protections on those services. Enhanced privacy protections could benefit not only children but adults as well. Because many parents may ultimately choose to provide consent, it is important to consider how privacy protections could be implemented on online services.

These youth-focused proposals follow last year’s passage of the Utah Consumer Privacy Act, a comprehensive privacy law that created some new rights for Utah residents but provides fewer protections than other state frameworks. Adding privacy protections for young people would not just help Utah align with other states but would also address several of the privacy risks the social media bills would create. Examples of privacy-protective provisions include:

FTC Requires Algorithmic Disgorgement as a COPPA Remedy for First Time

On March 4, the Federal Trade Commission (FTC) and Department of Justice (DOJ) announced a settlement agreement with WW International and its subsidiary, Kurbo (Kurbo by WW), after charging the companies with violating the Children’s Online Privacy Protection Act (COPPA) by improperly collecting health information and other data from children as young as eight years old. Among other penalties, the settlement requires the deletion of all “affected work product” (which includes algorithms) resulting from the companies’ collection of children’s data. Significantly, this is the first time that the FTC has imposed an algorithmic disgorgement penalty in a COPPA enforcement action, a measure that reflects the Commission’s increasing focus on algorithmic fairness.

The COPPA Claim

Aimed at protecting children under 13, COPPA applies to online service providers and commercial websites that (1) are directed to children and collect, use, or disclose children’s personal information; (2) have actual knowledge that children use the provider’s online service; or (3) provide a third party service to websites or online service providers that collect information from children. Among other requirements, services subject to COPPA must:

  1. Give parental notice before collecting, using, or disclosing child data;
  2. Make reasonable efforts to ensure parents receive direct notice of the collection, use, or disclosure of child data; 
  3. Obtain verifiable parental consent (VPC) before the collection, use, or disclosure of child data; and 
  4. Retain child data no longer than necessary to further the purpose for which the provider collected the information.

Here, the FTC and DOJ alleged that Kurbo by WW, a service marketed to children under 13, failed to provide adequate notice and obtain VPC before collecting personal information including weight, height, food intake, and physical activity. Specifically, the agencies argued that the measures Kurbo by WW did take, such as an age gate, were insufficient under the rule and even incentivized children to lie about their birthdate to circumvent the measures. Moreover, the agencies alleged Kurbo by WW retained child data indefinitely and would delete it only upon parental request. The settlement imposes multiple remedies, including injunctive relief, monetary fines, compliance reporting, and, significantly, a requirement that Kurbo by WW delete all work product resulting from the collection of children’s personal information.

The Significance of This Settlement

The significant part of this settlement is the algorithmic disgorgement penalty: the requirement that the companies delete all algorithms resulting, in part or in whole, from the inappropriate collection of children’s data. The FTC imposed this penalty for the first time in 2019 in a final order against Cambridge Analytica. The agency used the remedy again in the 2021 Everalbum settlement, in which the developers of a photo app were required to delete facial recognition algorithms developed through training on data that was improperly collected. In a significant next step, this is the first time we have seen the agency impose the penalty in a COPPA settlement. Like monetary fines, compliance reporting, and other injunctive relief, algorithmic disgorgement is a measure intended to deter companies from improperly collecting and retaining child data. However, the penalty goes a step further than other COPPA remedies by preventing companies from benefiting from the improperly collected data in the future. In the FTC’s press release for the settlement, FTC Chair Lina Khan remarked, “Our order against these companies requires them to delete their ill-gotten data, destroy any algorithms derived from it, and pay a penalty for their lawbreaking.” This strong language from the FTC Chair signals an interest in doing more to hold companies subject to COPPA accountable.

Recently, child privacy has become a trending topic for both policymakers and enforcement agencies. Historically, the FTC has tended to bring only a few COPPA cases per year, but the Kurbo by WW settlement marks the FTC’s second COPPA settlement in just three months. Time will tell whether COPPA enforcement actions become more frequent in the wake of increasing calls to protect children’s privacy. Regardless, this settlement stands to impact future COPPA enforcement by setting a new precedent for the penalties the FTC is willing to impose on companies. It also raises important questions about how companies can obtain effective VPC, an issue FPF’s Youth & Education team is exploring in our report on The State of Play: Verifiable Parental Consent and COPPA. Companies with child audiences should pay close attention to this settlement and its penalties and ensure their practices comply with COPPA.

Additional Resources:

FTC Blog Post on the Kurbo by WW settlement for Businesses 

For more on COPPA and VPC, see FPF’s Work on Verifiable Parental Consent (VPC) at thestateofplay.org