MythBusters: COPPA Edition

Following YouTube’s September settlement with the Federal Trade Commission (FTC) regarding the Children’s Online Privacy Protection Act (COPPA), YouTube released a video in late November explaining upcoming changes to their platform. The YouTube creator community responded in large numbers, with numerous explainer videos and almost two hundred thousand comments filed in response to the FTC’s just-closed call for comments on the COPPA rule. Some responses have been insightful and sophisticated. Others have indicated confusion and misunderstanding of COPPA’s requirements and scope. This blog addresses some of the most common myths circulating in YouTube videos, tweets, and Instagram posts.

Fact 1: You can’t “stop COPPA” by filing comments with the FTC

Some COPPA explainer videos call for viewers to write comments to the FTC asking them to “stop COPPA.” There are two problems with this call to action. First, COPPA is a law passed by Congress; the FTC writes and enforces the rule that implements it, but it cannot repeal the statute. Second, the changes creators are upset about came from YouTube’s settlement with the FTC, not from the COPPA Rule itself.

The recent changes to YouTube occurred because the platform agreed to modify some of its practices (and pay a fine) rather than go to court against the FTC. The major change for creators is the new “made for kids” flag: all creators are required to select whether their channel or a specific video is created for kids or for a more general audience. The flag tells YouTube when it can and cannot place targeted advertising (which COPPA does not allow for kids without parental consent). The settlement did not require the algorithm that looks for misflagged content or the changes in viewer functionality on “made for kids” videos. YouTube chose to make those changes on its own, and the FTC cannot change the platform back to the way it was.

That doesn’t mean that creators shouldn’t engage with the FTC. The current FTC comment period has closed, but the process of revising the COPPA Rule will likely take years, and there will be more opportunities to be heard. The FTC needs to know what data creators have about their users and how that data drives decision-making. Tell them about your income from advertisements, your level of control over the advertisements on your channel, and what you wish you knew about advertising on YouTube. The FTC needs to understand your business model and the business of content creators on platforms in general. So tell them how information from YouTube informs the content on your Twitch stream, changes how you use your Patreon, or influences your merchandising decisions. The best way to get rules that protect children’s privacy while supporting content creation on the internet is to make sure the FTC knows how this portion of the economy works.

A note about commenting: be polite. When you make comments on a government website, a real person will read them. So don’t use all caps or profanity, and write in complete sentences. The point is not to fight the FTC but to inform them of your concerns. While asking your viewers to comment can be useful, make sure that you remind them to be kind as well. Civil, smart comments are more likely to have a positive influence.

Fact 2: COPPA does not infringe on First Amendment rights

COPPA regulates the collection, use, and sharing of children’s personal information. COPPA does impose specific obligations on websites and online services directed to children, but those obligations do not dictate the content you can or cannot create. COPPA may make it more difficult to profit from child-directed content, but that does not mean that anyone’s rights have been infringed. YouTube’s Terms of Service and platform design have more influence than any American law over the kind of content users can create.

Fact 3: Not everything is child-directed

YouTube’s video explaining COPPA-related changes to the platform includes a list of factors that can indicate whether a video or channel is made for kids.

[Screenshot from YouTube’s video, “Made for Kids – Factors to Consider”: the subject matter of the video; whether children are the intended audience; whether it includes child actors or models; whether it includes characters, celebrities, or toys that appeal to children; whether it uses language meant for children to understand; whether it includes activities that appeal to children; and whether it includes songs, stories, or poems that appeal to children.]

This has caused some confusion. Just because a channel or video may check one or two of these boxes does not mean that it is child-directed. The FTC said as much in its blog post (if you have not read it yet, please do). The blog was written specifically to answer questions about what constitutes “child-directed” content.

If you’re still confused about whether your content is child-directed, ask yourself, “Who am I speaking to?” When you create a video, you should have an idea of who will watch it, and people use different communication strategies for different audiences. For example, think about sitcoms created for major TV networks like CBS, NBC, or ABC versus sitcoms created for the Disney Channel or Nickelodeon. While the two types share similarities, including the format, some of the situations, and the presence of a laugh track, there are major differences that reflect the different audiences. Sitcoms created for children often use simpler language, vivid costumes, and broad over-acting (for a visual explanation, watch this clip from SNL). These features make the content more engaging for children. Think about your content. Where does it belong: the Disney Channel or ABC?

After you decide whether your channel or individual video is made for kids, write down why you made that decision. There are two reasons for doing this. First, if the FTC contacts you or if YouTube changes the flag on your video, you will have already prepared your response; you won’t have to reconstruct decisions you made months or even years earlier. Second, and more importantly, it will keep your decisions consistent. As you continue to work and create new videos, you may not remember why you said one video was made for kids while another wasn’t. Writing down your reasoning not only creates a record of your decision-making process but should also make consistent flagging easier in the long run.

Fact 4: A COPPA violation probably won’t bankrupt your channel

The FTC has stated that when it determines fines for violations, it considers “a company’s financial condition and the impact a penalty could have on its ability to stay in business.” The FTC’s mission is not to put people out of business but to protect consumers. The agency has limited staff and resources; targeting small channels, or channels where reasonable people could disagree about whether the content is child-directed, is not typically a good use of its time. But this does not excuse creators from reviewing videos and flagging content appropriately. You must comply with COPPA, but if you make a mistake, the FTC’s likely first action would be to ask you to change your flag rather than to impose a large fine.

A final piece of advice: do not panic. Panic won’t help. Take a deep breath, review your channel, and stay informed about any other changes that YouTube may announce.

Increased Surveillance is Not an Effective Response to Mass Violence

By Sara Collins and Anisha Reddy

This week, Senator Cornyn introduced the RESPONSE Act, an omnibus bill meant to reduce violent crimes, with a particular focus on mass shootings. The bill has several components, including provisions that would have significant implications for how sensitive student data is collected, used, and shared. The most troubling part of the proposal would broaden the categories of content schools must monitor under the Children’s Internet Protection Act (CIPA); specifically, schools would be required to “detect online activities of minors who are at risk of committing self-harm or extreme violence against others.” 

Unfortunately, the proposed measures are unlikely to improve school safety; there is little evidence that increased monitoring of all students’ online activities would increase the safety of schoolchildren, and technology cannot yet be used to accurately predict violence. The monitoring requirements would place an unmanageable burden on schools, pose major threats to student privacy, and foster a culture of surveillance in America’s schools. Worse, the RESPONSE Act mandates would reduce student safety by redirecting resources away from evidence-based school safety measures.

More Untargeted Monitoring is Not the Answer

About 95% of schools are required to create internet safety policies under CIPA (these requirements are tied to schools’ participation in the “E-rate” telecommunications discount program). CIPA requires safety policies to include technology that monitors, blocks, and filters students’ attempts to access inappropriate online content. CIPA generally imposes monitoring requirements regarding: obscene content; child pornography; and content that is otherwise harmful to minors. 

The RESPONSE Act would impose new obligations, requiring schools to infer whether a student’s internet use might indicate they are at risk of committing self-harm or extreme violence against others. However, there is little evidence that detecting or blocking this kind of content is technically possible, or that doing so would prevent physical harm. A report on school safety technology funded by the U.S. Department of Justice noted that violence prediction software is “immature technology.” Not only is the technology immature, but the FBI has also found that there is no single profile of a school shooter: scanning student activity to look for the next “school shooter” is unlikely to be effective.

By directing schools to implement “technology protection measure[s] that detect online activities of minors who are at risk of committing self-harm or extreme violence against others,” the RESPONSE Act would essentially require that all schools across the nation implement some form of comprehensive network or device monitoring technology to scan lawful content–a direct violation of local control and a serious invasion of students’ privacy. 

This broad language could encourage schools to collect as much information as possible about students, requiring already overwhelmed faculty and administrators to spend countless hours sifting through contextually harmless student data–hours that could be better spent engaging with students directly.

Additionally, this technology mandate could limit schools’ ability and desire to implement more thoughtful and effective programs and policies designed to improve school safety. Schools may assume that network monitoring technology is more effective than it actually is and redirect resources away from evidence-based school safety measures, such as holistic approaches to early intervention. Further, without more guidance, school administrators would be forced to make judgment calls that result in the over-monitoring of student online activity.

The cost associated with implementing these technologies goes beyond buying appropriate network monitoring software, which is a burden in and of itself. Schools, which are under-resourced and under-staffed, would struggle to devote funds and staff time to monitoring the resulting alerts and to developing policies for responding to them. These burdens are further compounded in rural school districts that already receive less funding per student.

False Alerts Unjustly Trap Students in the Threat Assessment Process

In some cases, network monitoring does not end when the school day ends. Schools often issue devices for students to take home or provide online accounts that students access from devices at home. Under the RESPONSE Act, these schools would be forced to monitor students constantly. If a school gets an alert during non-school hours, its default action may be to contact law enforcement. But sending law enforcement to conduct wellness checks is not a neutral action. These interactions can be traumatic for students and families, and they can result in injury or false imprisonment. These harms are exacerbated when monitoring technology produces overwhelming numbers of false positives.
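
To see why false positives can overwhelm a monitoring program, here is a back-of-the-envelope calculation, in Python, using invented numbers (none of the figures below come from the bill, any school district, or any real vendor). It shows that even a scanner that is right 99% of the time will bury a district in false alerts when the behavior it looks for is rare.

    # Illustrative arithmetic only -- all numbers are invented for this sketch.
    students = 50_000             # students in a hypothetical district
    truly_at_risk = 5             # assume a genuinely at-risk handful
    true_positive_rate = 0.99     # the scanner catches 99% of real cases
    false_positive_rate = 0.01    # and wrongly flags 1% of everyone else

    true_alerts = truly_at_risk * true_positive_rate
    false_alerts = (students - truly_at_risk) * false_positive_rate

    print(round(true_alerts), "real cases flagged")          # ~5
    print(round(false_alerts), "students wrongly flagged")   # ~500
    # Roughly 99 out of every 100 alerts would point at a student who did nothing wrong.

Under these invented assumptions, staff would have to investigate roughly five hundred alerts to find a handful of genuine cases, which is exactly the kind of burden described above.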

Even if content monitoring technology were effective, the belief that surveillance has no negative outcomes or consequences for students is a pernicious narrative. Surveillance technologies, like device, network, or social media monitoring services, can harm students by stifling their creativity, individual growth, and speech. Constant surveillance also conditions students to expect and accept that authority figures, such as the government, will always monitor their activity. We also know that students of color and students with disabilities are disproportionately suspended, arrested, and expelled compared to white students and non-disabled students. The RESPONSE Act’s proposed new requirements would only further exacerbate this disparity.

Schools, educators, caregivers, and communities are in the best position to notice and address concerning student behavior. The Department of Education has several resources outlining effective disciplinary measures in schools, finding that “[e]vidence-based, multi-tiered behavioral frameworks . . . can help improve overall school climate and safety.”

Ultimately, requiring schools to spend money on ineffective technology would divert much-needed resources and staff from providing students with a safe learning environment. Rather than focusing on filtering content, schools should emphasize the importance of safe and responsible internet use and use school safety funding on evidence-based solutions. By doing so, administrators can create a school community built on trust rather than suspicion.

FTC Reaches Landmark Settlement Regarding Kids’ Privacy, Clarifies Platforms’ and Video Creators’ COPPA Obligations for Child-Directed Content

By Sara Collins

Last week the Federal Trade Commission (FTC) released details of a settlement with YouTube under the Children’s Online Privacy Protection Act (COPPA). Although notable for its landmark monetary penalty, the settlement is probably more important for the other requirements that it places on YouTube and content creators. Some of YouTube’s settlement obligations exceed COPPA’s statutory and regulatory requirements, and some of the changes YouTube has announced go beyond the settlement itself.

According to a public statement, YouTube will disable personalized advertising (i.e., behavioral advertising), commenting, and public playlist sharing on content identified as “child-directed.” While not required by the settlement, YouTube will also deploy a machine-learning algorithm to identify videos that are in fact child-directed but have not been self-identified as such by content creators.

Basis for the Settlement

The FTC concluded that YouTube had actual knowledge that many channels hosting content on its platform are “child-directed” and therefore trigger COPPA obligations for the company. Google and its subsidiary YouTube will pay a record $170 million to settle allegations by the Commission and the New York Attorney General that the YouTube video sharing service illegally collected personal information from children without their parents’ consent. The settlement includes a monetary penalty of $136 million to the FTC and $34 million to New York State. The $136 million penalty is by far the largest amount the FTC has obtained in a COPPA case since Congress enacted the law in 1998; the previous record was $5.7 million against Musical.ly (now TikTok).

Actual Knowledge

The settlement hinges on COPPA’s “actual knowledge” standard. COPPA imposes obligations on online services that have actual knowledge that they are collecting, using, or disclosing personal information from children under 13. Services with actual knowledge of such data collection must implement privacy safeguards, including: obtaining verifiable parental consent for the collection of personal information from children; providing parents with access to children’s data retained by the company; and providing enhanced transparency regarding data practices.

The settlement characterizes YouTube as a third-party operator, rather than a platform. According to the FTC’s COPPA FAQ, third-party operators are deemed to have actual knowledge that children are using a service if:

  1. a child-directed content provider (which is strictly liable for any collection) directly communicates the child-directed nature of its content to the third-party operator; or

  2. a representative of the third-party operator’s ad network recognizes the child-directed nature of the content.

YouTube meets both prongs of this test. The complaint details Google’s knowledge that the YouTube platform hosts numerous child-directed channels. YouTube marketed itself as a top destination for kids in presentations to the makers of popular children’s products and brands. For example, Google and YouTube told Mattel, maker of Barbie and Monster High toys, that “YouTube is today’s leader in reaching children age 6-11 against top TV channels” and told Hasbro, which makes My Little Pony and Play-Doh, that YouTube is the “#1 website regularly visited by kids.” Several channel owners also told YouTube and Google that their channels’ content was directed to children. In other instances, YouTube’s own content rating system identified content as directed to children. 

This set of facts, combined with those from the Musical.ly settlement, puts companies on notice that their interactions with the public, board members, potential advertising partners, and content creators are all relevant factors when the FTC determines whether platforms have actual knowledge that they collect information from children under the age of thirteen.

Content Creators = Operators

Chairman Simons’ and Commissioner Wilson’s statement notes that individual channels on a general audience platform are “website[s] or online service[s]” and, therefore, each individual creator who operates a channel is “on notice that [the Commission] consider[s] them to be standalone ‘operators’ under COPPA.” This means that YouTube creators may face strict liability for COPPA non-compliance — likely a shock to most creators, who do not control the information collected by YouTube, do not contract with advertisers, and cannot negotiate how revenue is split. However, because creators can choose whether or not to allow targeted advertising on their channels, the Commission construes YouTube as collecting personal information from children on behalf of individual creators. This raises economic and compliance challenges for creators, because YouTube warns creators that opting out of targeted advertising “may significantly reduce [the] channel’s revenue.” Channel owners who publish child-directed content will likely face a choice: either eschew targeted advertising and the associated data collection, or obtain verifiable parental consent before collecting personal information from viewers under thirteen.

The statement also announces that the FTC will conduct sweeps of channels following YouTube’s implementation of the order’s provisions. Because COPPA applies to commercial entities (defined broadly to include anyone who makes money from an enterprise that is not a governmental body or non-profit), monetized channels will need to be aware of potential COPPA implications. 

YouTube also stated that the platform would voluntarily implement a machine-learning algorithm to identify child-directed content that is not self-designated by content creators. We expect that this will be similar to other algorithmic screeners, such as those used for DMCA compliance. Critics have charged that YouTube’s algorithms are biased, opaque, and inaccurate. It will be challenging for the platform to algorithmically determine what content is or is not child-directed; this analysis will necessarily be nuanced and complex. The analysis is also likely to be subjective. Some YouTube content is plainly child-directed, some channels are clearly not child-directed, and reasonable people can and will disagree about a diverse range of content in between. Any algorithm will be imperfect, creating the risk that the assessments will be both over- and under-inclusive when determining what is and is not child-directed content.

One probable issue: videos that are appropriate for children (but directed to adults or general audiences) can be inadvertently flagged as directed toward children. The Commission’s Complaint notes that YouTube videos were manually reviewed prior to being featured on the homepage of the YouTube Kids app — a separate service specifically designed for kids, and distinct from the general audience YouTube service. The Commission did not detail what the review entailed; however, one could imagine reviewers making determinations based on the content’s appropriateness for children rather than assessing the content to see if it was specifically directed toward children. For example, a David Attenborough video about sharks might be perfectly acceptable on the homepage of the YouTube Kids app, but on the general audience YouTube service, it probably would not be considered child-directed. Conflating child-appropriate content and child-directed content is common. Machine-learning algorithms may mitigate or exacerbate this problem. The implications for creators of children’s media (and child-adjacent media) are uncertain, but, at a minimum, creating content for kids will likely be less lucrative.
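
To make that conflation concrete, here is a deliberately naive sketch, in Python, of the kind of keyword-based metadata screener one could imagine. It is purely hypothetical; nothing here reflects YouTube’s actual system, and the keyword list, weights, and threshold are invented for illustration. Because its features really measure whether content is child-appropriate rather than child-directed, it flags the adult-oriented nature documentary described above.

    # Hypothetical illustration only -- not YouTube's algorithm.
    # A naive keyword screener that scores video metadata for "child-directed" signals.
    CHILD_SIGNALS = {
        "animals": 1.0, "cartoon": 2.0, "toys": 2.0, "sharks": 1.0,
        "songs": 1.5, "colors": 1.0, "learning": 1.0,
    }
    FLAG_THRESHOLD = 2.0  # invented cutoff for this sketch

    def child_directed_score(title: str, tags: list[str]) -> float:
        """Sum the weights of any child signals found in the title or tags."""
        words = title.lower().split() + [t.lower() for t in tags]
        return sum(CHILD_SIGNALS.get(w, 0.0) for w in words)

    def is_flagged(title: str, tags: list[str]) -> bool:
        return child_directed_score(title, tags) >= FLAG_THRESHOLD

    # A nature documentary aimed at general audiences still trips the flag,
    # because "sharks" and "animals" read as child-friendly keywords.
    print(is_flagged("Attenborough on sharks", ["animals", "ocean", "documentary"]))  # True
    print(is_flagged("Quarterly earnings call", ["finance", "investors"]))            # False

A real classifier would be far more sophisticated, but the underlying difficulty is the same: the signals that make content safe for children are not the same signals that show it was made for them.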

Targeted Advertising vs. Internal Operations

Several aspects of the settlement hinge on the Commission’s interpretation of COPPA’s “internal operations” exception – a provision that permits operators to collect persistent identifiers from children without parental consent if the identifiers are only used for internal operations – e.g. personalizing content, protecting user security, or ensuring legal compliance. The settlement reinforces the FTC’s interpretation that, in the context of child-directed content, targeted advertising without verifiable parental consent is prohibited by COPPA. While this position is not necessarily controversial — the DMA, IAB, NAI, and most industry leaders agree — it is useful for the FTC to reiterate this point.

The “internal operations” exception also drives the Commissioners’ civil penalty analysis. Commissioner Chopra notes that YouTube used viewing data to enhance the effectiveness of its recommendation engines, and that the value of that enhancement should have been considered when negotiating the ultimate monetary penalty of the settlement. Chairman Simons and Commissioner Wilson disagree, characterizing recommendation engine data as being used within the bounds of the “internal operations” exception to obtaining parental consent. They argue that “obtaining penalties in this matter based on the argument that enhancement of Google’s other products and services through analytics such as page views, time spent on a video, or algorithms for recommending videos is ill-gotten, is highly speculative.” The FTC has requested comment regarding the appropriate boundaries of the “internal operations” exception, and the subject is likely to be a focus of the upcoming FTC COPPA workshop.