FPF Publishes Infographic, Readiness Checklist To Support Schools Responding to Deepfakes
Today, the Future of Privacy Forum (FPF) released an infographic and readiness checklist to help schools better understand and prepare for the risks posed by deepfakes. Deepfakes are realistic, synthetic media, including images, videos, audio, and text, created using a type of Artificial Intelligence (AI) called deep learning. By manipulating existing media, deepfakes can make it appear as though someone is doing or saying something that they never actually did.
Deepfakes, while relatively new, are quickly becoming prevalent in K-12 schools. Schools have a responsibility to create a safe learning environment, and a deepfake incident – even if it occurs outside of school – poses real risks to that environment, including bullying and harassment, the spread of misinformation and disinformation, threats to personal safety and privacy, and broken trust.
FPF’s infographic describes the different types of deepfakes – video, text, image, and audio – and the varied risks and considerations each poses in a school setting, from fabricated phone calls and voice messages impersonating teachers to the sharing of forged, non-consensual intimate imagery (NCII).
“Deepfakes create complicated ethical and security challenges for K-12 schools that will only grow as the technology becomes more accessible and sophisticated, and the resulting images harder to detect,” said Jim Siegl, Senior Technologist with FPF’s Youth & Education Privacy team. “Schools should understand the risks, their responsibilities and protocols in place to respond, and how they will protect students, staff, and administrators while addressing an incident.”
FPF has also developed a readiness checklist to support schools in assessing and preparing response plans. The checklist outlines a series of considerations for school leaders, from the need for education and training, to determining how existing technology, policies, and procedures might apply, to engaging legal counsel and law enforcement.
The infographic maps out the various stages of a school’s response to an example scenario – a student reporting that they received a sexually explicit photo of a friend and that the image is circulating among a group of students – inviting school leaders to consider the following:
- How can your school leverage internal investigative tools or processes used for other technology violations?
- What process does your school use to reduce distribution, ensure the privacy of all students involved in the investigation, and provide appropriate support to the targeted individual?
- How might the possibility that the image is a deepfake affect the investigation and response?
- What policies and procedures does your school have that may apply?
- What policies does your school have to ensure students’ privacy and minimize reputational harm when communicating?
As an additional resource for school leaders and policymakers navigating the rapid deployment of AI and related technologies in schools, FPF has developed an infographic highlighting AI’s varied use cases in an educational setting. While deepfakes are a new and evolving challenge, edtech tools using AI have been in schools for years.