A School Administrator’s Guide to Addressing Deepfakes
Generative AI poses significant ethical and security challenges for schools: it can be used to spread misinformation, perpetrate fraud, target students and staff, and undermine trust. Generative AI enables the widespread creation of AI-generated or “synthetic” media, which is increasingly indistinguishable from human-generated or “authentic” content. Perhaps the most concerning type of synthetic content in schools is the “deepfake”: realistic, synthetic media created or manipulated using artificial intelligence (AI). Deepfakes can take the form of images, video, audio, or text. By manipulating existing media, deepfakes can seamlessly alter an individual’s appearance and voice, making it appear as though they said or did something they did not.
The Future of Privacy Forum’s Deepfakes in School: Risks and Readiness infographic provides a concise overview of this issue, outlining the technology’s capabilities and risks. Building on that foundation, this toolkit examines the broader implications for schools, offering practical guidance on identifying, responding to, and mitigating the harms posed by deepfakes. It includes scenarios to facilitate critical conversations; considerations related to data privacy, legal, and ethical challenges; and an overview of the evolving legal landscape. A series of checklists also supports school leaders in developing proactive strategies and ensuring appropriate responses. As a comprehensive resource, this toolkit equips school leaders with the tools they need to navigate deepfake incidents while maintaining ethical and legal integrity.