An AI-based computer system can gather data and use that data to make decisions or solve problems – using algorithms to perform tasks that, if done by a human, would be said to require intelligence. AI and machine learning (ML) systems are already delivering benefits in better health care, safer transportation, and greater efficiencies across the globe. But the increased amounts of data and computing power that enable sophisticated AI and ML models raise questions about privacy impacts, ethical consequences, fairness, and real-world harms if the systems are not designed and managed responsibly. FPF works with commercial, academic, and civil society supporters and partners to develop best practices for managing risk in AI and ML, and to assess whether established data protection principles such as fairness, accountability, and transparency are sufficient to answer the ethical questions these systems raise.
Featured
FPF Statement on the adoption of the EU AI Act
“Today the European Union adopted the EU AI Act at the end of a long and intense legislative process. At the Future of Privacy Forum we believe that multistakeholder global approaches and advancing common understanding in the area of AI governance are key to ensuring a future with safe and trustworthy AI, one that protects […]
Future of Privacy Forum Awarded National Science Foundation and Department of Energy Grants to Advance White House Executive Order on Artificial Intelligence
The Future of Privacy Forum (FPF) has been awarded grants by the National Science Foundation (NSF) and the Department of Energy (DOE) to support FPF’s establishment of a Research Coordination Network (RCN) for Privacy-Preserving Data and Analytics. FPF’s work will support the development and deployment of Privacy Enhancing Technologies (PETs) for socially beneficial data sharing […]
FPF Joins the NIST Artificial Intelligence Safety Consortium
The Future of Privacy Forum (FPF) is collaborating with the National Institute of Standards and Technology (NIST) in the U.S. Artificial Intelligence Safety Institute Consortium to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world. This initiative will help prepare the U.S. […]
FPF Announces International Technology Policy Expert as New Head of Artificial Intelligence
FPF has appointed international technology policy expert Anne J. Flanagan as Vice President for Artificial Intelligence (AI). In this new role, Anne will lead the privacy organization’s portfolio of projects exploring the data flows driving algorithmic and AI products and services, their opportunities and risks, and the ethical and responsible development of this technology. Anne […]
Regu(AI)ting Health: Lessons for Navigating the Complex Code of AI and Healthcare Regulations
Authors: Stephanie Wong, Amber Ezzell, & Felicity Slater. As an increasing number of health organizations utilize artificial intelligence (“AI”) in their patient-facing services, they are seizing the opportunity to take advantage of the new wave of AI-powered tools. Policymakers, from United States (“U.S.”) government agencies to the White House, have taken heed of this trend, […]
A Blueprint for the Future: White House and States Issue Guidelines on AI and Generative AI
Since July 2023, eight U.S. states (California, Kansas, New Jersey, Oklahoma, Oregon, Pennsylvania, Virginia, and Wisconsin) and the White House have published executive orders (EOs) to support the responsible and ethical use of artificial intelligence (AI) systems, including generative AI. In response to the evolving AI landscape, these directives signal a growing recognition of the […]
FPF and OneTrust Release Collaboration on Conformity Assessments under the proposed EU AI Act: A Step-by-Step Guide & Infographic
Today, the Future of Privacy Forum (FPF) and OneTrust released a collaboration on Conformity Assessments under the proposed EU AI Act: A Step-by-Step Guide and accompanying Infographic. Conformity Assessments are a key and overarching accountability tool introduced in the proposed EU Artificial Intelligence Act (EU AIA or AIA) for high-risk AI systems. Conformity Assessments are […]
FPF Statement on Biden-Harris AI Executive Order
The Biden-Harris AI plan is incredibly comprehensive, with a whole-of-government approach and an impact beyond government agencies. Although the executive order focuses on the government’s use of AI, the influence on the private sector will be profound due to the extensive requirements for government vendors, worker surveillance, education and housing priorities, the […]
FPF Submits Comments to the FEC on the Use of Artificial Intelligence in Campaign Ads
On October 16, 2023, the Future of Privacy Forum submitted comments to the Federal Election Commission (FEC) on the use of artificial intelligence in campaign ads. The FEC is seeking comments in response to a petition that asked the Agency to initiate a rulemaking to clarify that its regulation on “fraudulent misrepresentation” applies to deliberately […]
Future of Privacy Forum and Leading Companies Release Best Practices for AI in Employment Relationships
Expert Working Group Focused on AI in Employment Launches Best Practices that Promote Non-Discrimination, Human Oversight, Transparency, and Additional Protections. Today, the Future of Privacy Forum (FPF), with ADP, Indeed, LinkedIn, and Workday — leading hiring and employment software developers — released Best Practices for AI and Workplace Assessment Technologies. The Best Practices guide makes […]