An AI-based computer system can gather data and use that data to make decisions or solve problems, applying algorithms to perform tasks that, if done by a human, would be said to require intelligence. AI and machine learning (ML) systems are already delivering benefits in better health care, safer transportation, and greater efficiency across the globe. But the growing amounts of data and computing power that enable sophisticated AI and ML models raise questions about privacy impacts, ethical consequences, fairness, and real-world harms if these systems are not designed and managed responsibly. FPF works with commercial, academic, and civil society supporters and partners to develop best practices for managing risk in AI and ML, and to assess whether established data protection practices such as fairness, accountability, and transparency are sufficient to answer the ethical questions these systems raise.
Featured
A Blueprint for the Future: White House and States Issue Guidelines on AI and Generative AI
Since July 2023, eight U.S. states (California, Kansas, New Jersey, Oklahoma, Oregon, Pennsylvania, Virginia, and Wisconsin) and the White House have published executive orders (EOs) to support the responsible and ethical use of artificial intelligence (AI) systems, including generative AI. In response to the evolving AI landscape, these directives signal a growing recognition of the […]
FPF and OneTrust Release Collaboration on Conformity Assessments under the proposed EU AI Act: A Step-by-Step Guide & Infographic
Today, the Future of Privacy Forum (FPF) and OneTrust released a joint publication, Conformity Assessments under the proposed EU AI Act: A Step-by-Step Guide, along with an accompanying infographic. Conformity Assessments are a key and overarching accountability tool introduced in the proposed EU Artificial Intelligence Act (EU AIA or AIA) for high-risk AI systems. Conformity Assessments are […]
FPF Statement on Biden-Harris AI Executive Order
The Biden-Harris AI plan is incredibly comprehensive, taking a whole-of-government approach with an impact that reaches beyond government agencies. Although the executive order focuses on the government's use of AI, its influence on the private sector will be profound due to the extensive requirements for government vendors, worker surveillance, education and housing priorities, the […]
FPF Submits Comments to the FEC on the Use of Artificial Intelligence in Campaign Ads
On October 16, 2023, the Future of Privacy Forum submitted comments to the Federal Election Commission (FEC) on the use of artificial intelligence in campaign ads. The FEC is seeking comments in response to a petition that asked the Agency to initiate a rulemaking to clarify that its regulation on “fraudulent misrepresentation” applies to deliberately […]
Future of Privacy Forum and Leading Companies Release Best Practices for AI in Employment Relationships
Expert Working Group Focused on AI in Employment Launches Best Practices that Promote Non-Discrimination, Human Oversight, Transparency, and Additional Protections. Today, the Future of Privacy Forum (FPF), with ADP, Indeed, LinkedIn, and Workday — leading hiring and employment software developers — released Best Practices for AI and Workplace Assessment Technologies. The Best Practices guide makes […]
How Data Protection Authorities are De Facto Regulating Generative AI
The Istanbul Bar Association IT Law Commission published Dr. Gabriela Zanfir-Fortuna’s article, “How Data Protection Authorities are De Facto Regulating Generative AI,” in the August edition of its monthly AI Working Group Bulletin, “Law in the Age of Artificial Intelligence” (Yapay Zekâ Çağında Hukuk). Generative AI took the world by storm over the past year, with services like […]
FPF Releases Generative AI Internal Policy Checklist To Guide Development of Policies to Promote Responsible Employee Use of Generative AI Tools
Today, the Future of Privacy Forum (FPF) releases the Generative AI for Organizational Use: Internal Policy Checklist. As employee use of generative AI tools proliferates, this checklist gives organizations a practical resource for revising their internal policies and procedures to ensure that employees are using generative AI in a way that […]
Insights into Brazil’s AI Bill and its Interaction with Data Protection Law: Key Takeaways from the ANPD’s Webinar
Authors: Júlia Mendonça and Mariana Rielli. The following is a guest post to the FPF blog by Júlia Mendonça, Researcher at Data Privacy Brasil, and Mariana Rielli, Institutional Development Coordinator at Data Privacy Brasil. The guest blog reflects the opinion of the authors only. Guest blog posts do not necessarily reflect the views of FPF. […]
Newly Updated Report: The Spectrum of Artificial Intelligence – Companion to the FPF AI Infographic
Today, we are re-releasing the report: The Spectrum of Artificial Intelligence – Companion to the FPF AI Infographic with new updates to account for the development and use of advanced generative AI tools. In December 2020, FPF published the Spectrum of Artificial Intelligence – An Infographic Tool, designed to visually display the variety and complexity […]
Unveiling China’s Generative AI Regulation
Authors: Yirong Sun and Jingxian Zeng. The following is a guest post to the FPF blog by Yirong Sun, research fellow at the Guarini Institute for Global Legal Studies at NYU School of Law: Global Law & Tech, and Jingxian Zeng, research fellow at the University of Hong Kong […]