Minding Mindful Machines: AI Agents and Data Protection Considerations
Minding Mindful Machines: AI Agents and Data Protection Considerations | April 2025 | Daniel Berrick | About FPF | The Future of Privacy Fo […]
Potential Harms And Mitigation Practices for Automated Decision-making and Generative AI
POTENTIAL HARMS AND MITIGATION PRACTICES for Automated Decision-making and Generative AI | APRIL 2025. All FPF materials that are released publicly are free to share and adapt with appropriate attribution. Learn more. UPDATED […]
FPF has updated its 2017 resource (“Distilling the Harms of Automated Decision-making”) with the goal of identifying and categorizing a broad range of potential harms that may result from automated decision-making, including heightened harms related to generative AI (GenAI), and potential mitigation practices. FPF reviewed leading books, articles, and other literature on the topic of […]
FPF AI Harms Charts Only R2
Potential Harms of Automated Decision-making Illegal/Unlawful Represents actions that are illegal under several civil rights laws, which generally protect core classifications — such as race, gender, age, and ability — against discrimination, disparate treatment, and disparate impact. Unfair Represents actions that are typically legal, but nonetheless trigger notions of unfairness. Like the “illegal” category, some […]
Chatbots in Check: Utah’s Latest AI Legislation
With the close of Utah’s short legislative session, the Beehive State is once again an early mover in U.S. tech policy. In March, Governor Cox signed into law several bills related to the governance of generative Artificial Intelligence systems. Among them, SB 332 and SB 226 amend Utah’s 2024 Artificial Intelligence Policy Act (AIPA), while HB 452 establishes new regulations for […]
FPF Publishes Infographic, Readiness Checklist To Support Schools Responding to Deepfakes
FPF released an infographic and readiness checklist to help schools better understand and prepare for the risks posed by deepfakes. Deepfakes are realistic, synthetic media, including images, videos, audio, and text, created using a type of Artificial Intelligence (AI) called deep learning. By manipulating existing media, deepfakes can make it appear as though someone is […]
FPF-Deep Fake_illo03-FPF-AI
Readiness Checklist:
- Educate and train students, staff, and parents about the impacts and consequences of deepfakes.
- Keep current with laws and regulations, and understand how laws apply to sharing student information, even when that information may be AI-generated.
- When investigating incidents, consider that any digital media could be a deepfake and that reliable detection is difficult.
- Engage in open discussion on how to address potential incidents.
- Establish communication protocols around what to communicate and to whom.
- Provide support to the impacted individuals, and consider the confidentiality and privacy of all parties when investigating and communicating about an incident.
- Be aware of personal and community biases and norms.
- Determine how existing policies and practices might apply, including policies on bullying, harassment, Title IX, sexting, technology use, disruption of school, misconduct outside of school, and impersonation of others on social media.
- Update current policies and procedures to ensure they address deepfakes, including image-based sexual abuse and harassment.
- Consider implementing policies for third-party vendors on how they should address a deepfake incident.
- Evaluate whether a sexually explicit deepfake incident qualifies as a form of sexual harassment.
- Consult with legal counsel.
- Work with local law enforcement to establish incident thresholds and response responsibilities.
- Establish an after-action review process.

Deepfakes can be created using commonly available online tools or proprietary programs, and they are increasingly sophisticated and hard to detect. While some jurisdictions have started to create transparency or authentication requirements for all types of synthetic content, including deepfakes, those requirements do not yet have substantial reach.
School leaders must be vigilant in addressing the potential impacts of deepfakes by identifying and mitigating potential harms while navigating this evolving challenge. Recent incidents […]