Increased Surveillance is Not an Effective Response to Mass Violence
By Sara Collins and Anisha Reddy
This week, Senator Cornyn introduced the RESPONSE Act, an omnibus bill meant to reduce violent crime, with a particular focus on mass shootings. The bill has several components, including provisions that would have significant implications for how sensitive student data is collected, used, and shared. The most troubling part of the proposal would broaden the categories of content schools must monitor under the Children’s Internet Protection Act (CIPA); specifically, schools would be required to “detect online activities of minors who are at risk of committing self-harm or extreme violence against others.”
Unfortunately, the proposed measures are unlikely to improve school safety: there is little evidence that monitoring all students’ online activity would make schoolchildren any safer, and technology cannot yet accurately predict violence. The monitoring requirements would place an unmanageable burden on schools, pose major threats to student privacy, and foster a culture of surveillance in America’s schools. Worse, the RESPONSE Act’s mandates would reduce student safety by redirecting resources away from evidence-based school safety measures.
More Untargeted Monitoring is Not the Answer
About 95% of schools are required to create internet safety policies under CIPA (these requirements are tied to schools’ participation in the “E-rate” telecommunications discount program). CIPA requires those policies to include technology that monitors, blocks, and filters students’ attempts to access inappropriate online content; its monitoring requirements generally cover three categories: obscene content, child pornography, and content that is otherwise harmful to minors.
The RESPONSE Act would impose new obligations, requiring schools to infer whether a student’s internet use indicates they are at risk of committing self-harm or extreme violence against others. However, there is little evidence that detecting or blocking this kind of content is technically feasible, or that doing so would prevent physical harm. A report on school safety technology funded by the U.S. Department of Justice described violence prediction software as “immature technology.” Not only is the technology immature, but the FBI has also found that there is no single profile of a school shooter: scanning student activity in search of the next “school shooter” is unlikely to be effective.
By directing schools to implement “technology protection measure[s] that detect online activities of minors who are at risk of committing self-harm or extreme violence against others,” the RESPONSE Act would effectively require every school in the nation to deploy some form of comprehensive network or device monitoring technology to scan lawful content, a direct violation of local control and a serious invasion of students’ privacy.
This broad language could encourage schools to collect as much information as possible about students, requiring already overwhelmed faculty and administrators to spend countless hours sifting through contextually harmless student data. Those hours could be better spent engaging with students directly.
Additionally, this technology mandate could limit schools’ ability and willingness to implement more thoughtful and effective programs and policies designed to improve school safety. Schools may assume that network monitoring technology is more effective than it actually is and redirect resources away from evidence-based school safety measures, such as holistic approaches to early intervention. Further, without more guidance, school administrators would be forced to make judgment calls that result in the over-monitoring of students’ online activity.
The cost of implementing these technologies goes beyond purchasing network monitoring software, which is a burden in and of itself. Schools, already under-resourced and under-staffed, would struggle to devote funds and staff time to reviewing the alerts this software generates and to developing policies for responding to them. These burdens are further compounded in rural school districts, which already receive less funding per student.
False Alerts Unjustly Trap Students in the Threat Assessment Process
In some cases, network monitoring does not end when the school day ends. Schools often issue devices that students take home, or provide online accounts that students access from devices at home. Under the RESPONSE Act, these schools would be forced to monitor students around the clock. If a school receives an alert during non-school hours, its default response may be to contact law enforcement. But sending law enforcement to conduct wellness checks is not a neutral action: these interactions can be traumatic for students and families, and can result in injury or false imprisonment. These harms are exacerbated when monitoring technology generates overwhelming numbers of false positives.
Even if content monitoring technology were effective, the belief that surveillance carries no negative outcomes or consequences for students is itself a pernicious narrative. Surveillance technologies, such as device, network, or social media monitoring services, can harm students by stifling their creativity, individual growth, and speech. Constant surveillance also conditions students to expect and accept that authority figures, such as the government, will always monitor their activity. We also know that students of color and students with disabilities are disproportionately suspended, arrested, and expelled compared to their white and non-disabled peers. The RESPONSE Act’s proposed new requirements would only exacerbate these disparities.
Schools, educators, caregivers, and communities are in the best position to notice and address concerning student behavior. The Department of Education has published several resources outlining effective disciplinary measures in schools, which note that “[e]vidence-based, multi-tiered behavioral frameworks . . . can help improve overall school climate and safety.”
Ultimately, requiring schools to spend money on ineffective technology would divert much-needed resources and staff from providing students with a safe learning environment. Rather than focusing on filtering content, schools should emphasize safe and responsible internet use and direct school safety funding toward evidence-based solutions. By doing so, administrators can build a school community grounded in trust rather than suspicion.