FPF AI Harms Charts Only R2
[…] or disparate impact of certain protected groups » Configuring AI tools with alternative review procedures for individuals who legally require reasonable accommodations POST-DEPLOYMENT & EVALUATION » Internal business processes to index concerns; ethical frameworks & best practices to monitor and evaluate outcomes » Regular review of high-risk systems to ensure the system does not […]
FPF AI Harms R5
[…] GENERATIVE AI | APRIL 2025 Description of Tables: Potential Harms of Automated Decision-making. This table groups the harms identified in the literature into four broad “buckets” — loss of opportunity, economic loss, social detriment, and loss of liberty or life — to depict the various spheres of life where automated decision-making can […]
Chatbots in Check: Utah’s Latest AI Legislation
With the close of Utah’s short legislative session, the Beehive State is once again an early mover in U.S. tech policy. In March, Governor Cox signed into law several bills related to the governance of generative Artificial Intelligence systems. Among them, SB 332 and SB 226 amend Utah’s 2024 Artificial Intelligence Policy Act (AIPA), while HB 452 establishes new regulations for […]
FPF Publishes Infographic, Readiness Checklist To Support Schools Responding to Deepfakes
FPF released an infographic and readiness checklist to help schools better understand and prepare for the risks posed by deepfakes. Deepfakes are realistic, synthetic media, including images, videos, audio, and text, created using a type of Artificial Intelligence (AI) called deep learning. By manipulating existing media, deepfakes can make it appear as though […]
Chatbots in Check: Utah’s Latest AI Legislation
[…] mental health chatbots, defined as an AI technology that uses generative AI to engage in conversations that a reasonable person would believe can provide mental health therapy. Business Obligations: Suppliers of mental health chatbots must refrain from advertising any products or services during user interactions unless explicitly disclosed. Suppliers are also prohibited from the […]
FPF-Deep Fake_illo03-FPF-AI
[…] your school have that may apply? Community dynamics are considered when constructing any public communication regarding the incident; all communication is consistent and mindful of privacy impacts. What processes does your school have to ensure the privacy of students and minimize harm when communicating? Real-World Example: Deepfake non-consensual intimate imagery (NCII) can be generated by face-swapping, replacing one person’s face with another’s, or by digitally “undressing” a clothed image to appear nude. Where NCII involves minors, it may also be considered Child Sexual Abuse Material (CSAM). These deepfakes raise many of the same issues as non-synthetic NCII and CSAM, though potential offenders may not appreciate the serious, criminal implications. While many of these deepfakes may be created and shared outside of school, schools are required to address off-campus behavior that creates a “hostile environment” in the school. Consider how your school would respond to the incident below as it unfolds. For more resources visit studentprivacycompass.org/deepfakes IMAGE: These forgeries can convincingly alter or create static images of people, objects, or settings that are entirely or partially fabricated. Methods like face swapping, morphing, and style transfer are often used. AUDIO: By mimicking vocal traits, audio deepfakes can convincingly replicate a person’s voice. They can be used to fabricate phone calls, voice messages, or public addresses.
FPF Publishes Infographic, Readiness Checklist To Support Schools Responding to Deepfakes
[…] happens outside of school – poses real risks to that, including through bullying and harassment, the spread of misinformation and disinformation, personal safety and privacy concerns, and broken trust. FPF’s infographic describes the different types of deepfakes – video, text, image, and audio – and the varied risks and considerations posed by each in […]
FPF-Sponsorship-Prospectus-Singles-DC-Privacy-Forum
Please contact sponsorship@fpf.org for more information. NETWORKING LUNCH SPONSOR • $7,500 • 1 available » Company name and logo included in schedule of events with recognition “Lunch brought to you by [Your company name]” » Company name and logo displayed on signage at Luncheon » Company logo on event webpage with link, […]
FPF-2025-Sponsorship-Prospectus
[…] that brings together industry, academics, civil society, policymakers, and other stakeholders to explore the challenges posed by technological innovation and develop privacy protections, ethical norms, and workable business practices. We are an independent and pragmatic voice for privacy regulation and take on the tough issues of integrating privacy protections with responsible data use. We […]