The FPF Center for Artificial Intelligence: Navigating AI Policy, Regulation, and Governance
The rapid deployment of Artificial Intelligence for consumer, enterprise, and government uses has created challenges for policymakers, compliance experts, and regulators. AI policy stakeholders are seeking sophisticated, practical policy information and analysis.
This is where the FPF Center for Artificial Intelligence comes in, expanding FPF’s role as the leading pragmatic and trusted voice for those seeking impartial, practical analysis of the latest challenges in AI regulation, compliance, and ethical use.
At the FPF Center for Artificial Intelligence, we help policymakers, privacy experts at organizations, civil society advocates, and academics navigate AI policy and governance.
FPF has a long history of AI-related and emerging technology policy work focused on data, privacy, and the responsible use of technology to mitigate harms. From FPF’s presentation to global privacy regulators about emerging AI technologies and risks in 2017 to our briefing for US Congressional members detailing the risks and mitigation strategies for AI-powered workplace tech in 2023, FPF has helped policymakers around the world better understand AI risks and opportunities, while equipping data, privacy, and AI experts with the information they need to develop and deploy AI responsibly in their organizations.
In 2024, FPF received a grant from the National Science Foundation (NSF) to advance the White House Executive Order on Artificial Intelligence by supporting the use of Privacy Enhancing Technologies (PETs) by government agencies and the private sector, advancing legal certainty, standardization, and equitable uses. FPF is also a member of the U.S. AI Safety Institute at the National Institute of Standards and Technology (NIST), where it focuses on assessing the policy implications of the changing nature of artificial intelligence.
Areas of work within the FPF Center for Artificial Intelligence include:
- Legislative Comparison
- Responsible AI Governance
- AI Policy by Sector
- AI Assessments & Analyses
- Novel AI Policy Issues
- AI and Privacy Enhancing Technologies
FPF Center for AI Leadership Council
The FPF Center for Artificial Intelligence will be supported by a Leadership Council of leading experts from around the globe. The Council will consist of members from industry, academia, civil society, and current and former policymakers.
We are delighted to announce the founding Leadership Council members:
- Estela Aranha, Member of the United Nations High-level Advisory Body on AI; Former State Secretary for Digital Rights, Ministry of Justice and Public Security, Federal Government of Brazil
- Jocelyn Aqua, Principal, Data, Risk, Privacy and AI Governance, PricewaterhouseCoopers LLP
- John Bailey, Nonresident Senior Fellow, American Enterprise Institute
- Lori Baker, Vice President, Data Protection & Regulatory Compliance, Dubai International Financial Centre Authority (DPA)
- Cari Benn, Assistant Chief Privacy Officer, Microsoft Corporation
- Andrew Bloom, Vice President & Chief Privacy Officer, McGraw Hill
- Kate Charlet, Head of Global Privacy, Safety, and Security Policy, Google
- Prof. Simon Chesterman, David Marshall Professor of Law & Vice Provost, National University of Singapore; Principal Researcher, Office of the UNSG’s Envoy on Technology, High-Level Advisory Body on AI
- Barbara Cosgrove, Vice President, Chief Privacy Officer, Workday
- Jo Ann Davaris, Vice President, Global Privacy, Booking Holdings Inc.
- Elizabeth Denham, Chief Policy Strategist, Information Accountability Foundation, Former UK ICO Commissioner and British Columbia Privacy Commissioner
- Lydia F. de la Torre, Senior Lecturer at University of California, Davis; Founder, Golden Data Law, PBC; Former California Privacy Protection Agency Board Member
- Leigh Feldman, SVP, Chief Privacy Officer, Visa Inc.
- Lindsey Finch, Executive Vice President, Global Privacy & Product Legal, Salesforce
- Harvey Jang, Vice President, Chief Privacy Officer, Cisco Systems, Inc.
- Lisa Kohn, Director of Public Policy, Amazon
- Emerald de Leeuw-Goggin, Global Head of AI Governance & Privacy, Logitech
- Caroline Louveaux, Chief Privacy Officer, MasterCard
- Ewa Luger, Professor of Human-Data Interaction, University of Edinburgh; Co-Director, Bridging Responsible AI Divides (BRAID)
- Dr. Gianclaudio Malgieri, Associate Professor of Law & Technology at eLaw, University of Leiden
- State Senator James Maroney, Connecticut
- Christina Montgomery, Chief Privacy & Trust Officer, AI Ethics Board Chair, IBM
- Carolyn Pfeiffer, Senior Director, Privacy, AI & Ethics and DSSPE Operations, Johnson & Johnson Innovative Medicine
- Ben Rossen, Associate General Counsel, AI Policy & Regulation, OpenAI
- Crystal Rugege, Managing Director, Centre for the Fourth Industrial Revolution Rwanda
- Guido Scorza, Member, The Italian Data Protection Authority
- Nubiaa Shabaka, Global Chief Privacy Officer and Chief Cyber Legal Officer, Adobe, Inc.
- Rob Sherman, Vice President and Deputy Chief Privacy Officer for Policy, Meta
- Dr. Anna Zeiter, Vice President & Chief Privacy Officer, Privacy, Data & AI Responsibility, eBay
- Yeong Zee Kin, Chief Executive of Singapore Academy of Law and former Assistant Chief Executive (Data Innovation and Protection Group), Infocomm Media Development Authority of Singapore
For more information on the FPF Center for AI, email [email protected]
Featured
Synthetic Content: Exploring the Risks, Technical Approaches, and Regulatory Responses
Today, the Future of Privacy Forum (FPF) released a new report, Synthetic Content: Exploring the Risks, Technical Approaches, and Regulatory Responses, which analyzes the various approaches being pursued to address the risks associated with “synthetic” content – material produced by generative artificial intelligence (AI) tools. As more people use generative AI to create synthetic content, […]
Understanding Extended Reality Technology & Data Flows: Privacy and Data Protection Risks and Mitigation Strategies
This post is the second in a two-part series. Click here for FPF’s XR infographic. The first post in this series focuses on the key functions that XR devices may feature, and analyzes the kinds of sensors, data types, data processing, and transfers to other parties that power these functions. I. Introduction Today’s virtual (VR), […]
Understanding Extended Reality Technology & Data Flows: XR Functions
This post is the first in a two-part series on extended reality (XR) technology, providing an overview of the technology and associated privacy and data protection risks. Click here for FPF’s infographic, “Understanding Extended Reality Technology & Data Flows.” I. Introduction Today’s virtual (VR), mixed (MR), and augmented (AR) reality environments, collectively known as extended […]
New Infographic Highlights XR Technology Data Flows and Privacy Risks
As businesses increasingly develop and adopt extended reality (XR) technologies, including virtual (VR), mixed (MR), and augmented (AR) reality, the urgency to consider potential privacy and data protection risks to users and bystanders grows. Lawmakers, regulators, and other experts are increasingly interested in how XR technologies work, what data protection risks they pose, and what […]
BCI Technical and Policy Recommendations to Mitigate Privacy Risks
This is the final post of a four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs. Click here for FPF and IBM’s full report: Privacy and the Connected Mind. In case you missed them, read the […]
BCI Commercial and Government Use: Gaming, Education, Employment, and More
This post is the third in a four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs. Click here for FPF and IBM’s full report: Privacy and the Connected Mind. In case you missed them, read the […]
BCIs & Data Protection in Healthcare: Data Flows, Risks, and Regulations
This post is the second in a four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs. Click here for FPF and IBM’s full report: Privacy and the Connected Mind. In case you missed it, read the […]
Brain-Computer Interfaces & Data Protection: Understanding the Technology and Data Flows
This post is the first in a four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs. Click here for FPF and IBM’s full report: Privacy and the Connected Mind. Additionally, FPF-curated resources, including policy & regulatory […]
Organizations must lead with privacy and ethics when researching and implementing neurotechnology: FPF and IBM Live event and report release
A New FPF and IBM Report and Live Event Explores Questions About Transparency, Consent, Security, and Accuracy of Data The Future of Privacy Forum (FPF) and the IBM Policy Lab released recommendations for promoting privacy and mitigating risks associated with neurotechnology, specifically with brain-computer interface (BCI). The new report provides developers and policymakers with actionable […]
Five Top of Mind Data Protection Recommendations for Brain-Computer Interfaces
By Jeremy Greenberg, [email protected] and Katelyn Ringrose [email protected]. Key FPF-curated background resources – policy & regulatory documents, academic papers, and technical analyses regarding brain-computer interfaces are available here. Recently, Elon Musk livestreamed an update for Neuralink—his startup centered around creating brain-computer interfaces (BCIs). BCIs are an umbrella term for devices that detect, amplify, and translate […]