The FPF Center for Artificial Intelligence: Navigating AI Policy, Regulation, and Governance
The rapid deployment of Artificial Intelligence for consumer, enterprise, and government uses has created challenges for policymakers, compliance experts, and regulators. AI policy stakeholders are seeking sophisticated, practical policy information and analysis.
This is where the FPF Center for Artificial Intelligence comes in, expanding FPF’s role as a leading, pragmatic, and trusted voice for those seeking impartial, practical analysis of the latest challenges in AI-related regulation, compliance, and ethical use.
At the FPF Center for Artificial Intelligence, we help policymakers, privacy professionals at organizations, civil society groups, and academics navigate AI policy and governance.
FPF has a long history of AI-related and emerging technology policy work focused on data, privacy, and the responsible use of technology to mitigate harms. From FPF’s 2017 presentation to global privacy regulators on emerging AI technologies and risks, to our 2023 briefing for members of the US Congress detailing the risks of AI-powered workplace technology and strategies to mitigate them, FPF has helped policymakers around the world better understand AI risks and opportunities. At the same time, we have equipped data, privacy, and AI experts with the information they need to develop and deploy AI responsibly in their organizations.
In 2024, FPF received a grant from the National Science Foundation (NSF) to advance the White House Executive Order on Artificial Intelligence by supporting the use of Privacy Enhancing Technologies (PETs) by government agencies and the private sector, promoting legal certainty, standardization, and equitable uses. FPF is also a member of the U.S. AI Safety Institute at the National Institute of Standards and Technology (NIST), where it focuses on assessing the policy implications of the changing nature of artificial intelligence.
Areas of work within the FPF Center for Artificial Intelligence include:
- Legislative Comparison
- Responsible AI Governance
- AI Policy by Sector
- AI Assessments & Analyses
- Novel AI Policy Issues
- AI and Privacy Enhancing Technologies
FPF Center for AI Leadership Council
The FPF Center for Artificial Intelligence will be supported by a Leadership Council of leading experts from around the globe. The Council will consist of members from industry, academia, civil society, and current and former policymakers.
We are delighted to announce the founding Leadership Council members:
- Estela Aranha, Member of the United Nations High-level Advisory Body on AI; Former State Secretary for Digital Rights, Ministry of Justice and Public Security, Federal Government of Brazil
- Jocelyn Aqua, Principal, Data, Risk, Privacy and AI Governance, PricewaterhouseCoopers LLP
- John Bailey, Nonresident Senior Fellow, American Enterprise Institute
- Lori Baker, Vice President, Data Protection & Regulatory Compliance, Dubai International Financial Centre Authority (DPA)
- Cari Benn, Assistant Chief Privacy Officer, Microsoft Corporation
- Andrew Bloom, Vice President & Chief Privacy Officer, McGraw Hill
- Kate Charlet, Head of Global Privacy, Safety, and Security Policy, Google
- Prof. Simon Chesterman, David Marshall Professor of Law & Vice Provost, National University of Singapore; Principal Researcher, Office of the UNSG’s Envoy on Technology, High-Level Advisory Body on AI
- Barbara Cosgrove, Vice President, Chief Privacy Officer, Workday
- Jo Ann Davaris, Vice President, Global Privacy, Booking Holdings Inc.
- Elizabeth Denham, Chief Policy Strategist, Information Accountability Foundation, Former UK ICO Commissioner and British Columbia Privacy Commissioner
- Lydia F. de la Torre, Senior Lecturer at University of California, Davis; Founder, Golden Data Law, PBC; Former California Privacy Protection Agency Board Member
- Leigh Feldman, SVP, Chief Privacy Officer, Visa Inc.
- Lindsey Finch, Executive Vice President, Global Privacy & Product Legal, Salesforce
- Harvey Jang, Vice President, Chief Privacy Officer, Cisco Systems, Inc.
- Lisa Kohn, Director of Public Policy, Amazon
- Emerald de Leeuw-Goggin, Global Head of AI Governance & Privacy, Logitech
- Caroline Louveaux, Chief Privacy Officer, MasterCard
- Ewa Luger, Professor of Human-Data Interaction, University of Edinburgh; Co-Director, Bridging Responsible AI Divides (BRAID)
- Dr. Gianclaudio Malgieri, Associate Professor of Law & Technology at eLaw, University of Leiden
- State Senator James Maroney, Connecticut
- Christina Montgomery, Chief Privacy & Trust Officer, AI Ethics Board Chair, IBM
- Carolyn Pfeiffer, Senior Director, Privacy, AI & Ethics and DSSPE Operations, Johnson & Johnson Innovative Medicine
- Ben Rossen, Associate General Counsel, AI Policy & Regulation, OpenAI
- Crystal Rugege, Managing Director, Centre for the Fourth Industrial Revolution Rwanda
- Guido Scorza, Member, The Italian Data Protection Authority
- Nubiaa Shabaka, Global Chief Privacy Officer and Chief Cyber Legal Officer, Adobe, Inc.
- Rob Sherman, Vice President and Deputy Chief Privacy Officer for Policy, Meta
- Dr. Anna Zeiter, Vice President & Chief Privacy Officer, Privacy, Data & AI Responsibility, eBay
- Yeong Zee Kin, Chief Executive of Singapore Academy of Law and former Assistant Chief Executive (Data Innovation and Protection Group), Infocomm Media Development Authority of Singapore
For more information on the FPF Center for AI, email [email protected].
Featured
New White Paper Explores Privacy and Security Risk to Machine Learning Systems
FPF and Immuta Examine Approaches That Can Limit Informational or Behavioral Harms
WASHINGTON, D.C. – September 20, 2019 – The Future of Privacy Forum (FPF) released a white paper, WARNING SIGNS: The Future of Privacy and Security in an Age of Machine Learning, exploring how machine learning systems can be exposed to new privacy and […]
Warning Signs: Identifying Privacy and Security Risks to Machine Learning Systems
FPF is working with Immuta and others to explain the steps machine learning creators can take to limit the risk that data could be compromised or a system manipulated.
What is 5G Cell Technology? How Will It Affect Me?
The leap from 3G to 4G technology brought with it faster data transfer speeds, which supported widespread adoption of data cloud and streaming services, video conferencing, and Internet of Things devices such as digital home assistants and smartwatches. 5G technology has the potential to enable another wave of smart devices: always connected and always communicating to provide faster, more personalized services.
Digital Deep Fakes
The media has recently labeled manipulated videos of people “deepfakes,” a portmanteau of “deep learning” and “fake,” on the assumption that AI-based software is behind them all. But the technology behind video manipulation is not all based on deep learning (or any form of AI), and what are lumped together as deepfakes actually differ depending on the particular technology used. So while the example videos above were all doctored in some way, they were not all altered using the same technological tools, and the risks they pose – particularly as to being identifiable as fake – may vary.
FPF Letter to NY State Legislature
On Friday, June 14, FPF submitted a letter to the New York State Assembly and Senate supporting a well-crafted moratorium on facial recognition systems for security uses in public schools.
Understanding Artificial Intelligence and Machine Learning
The opening session of FPF’s Digital Data Flows Masterclass provided an educational overview of Artificial Intelligence and Machine Learning – featuring Dr. Swati Gupta, Assistant Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech; and Dr. Oliver Grau, Chair of ACM’s Europe Technology Policy Committee, Intel Automated Driving Group, […]
A Thoughtful Discussion of Privacy Issues Raised by AI and Machine Learning
Recently, the Future of Privacy Forum and the Brookings Institution held a discussion session bringing together Hill staff, industry representatives and civil society groups. The conversation was guided by Cam Kerry of Brookings and Brenda Leong and John Verdi from FPF. Topics included whether AI and machine learning issues should be covered in a comprehensive […]
Fairness, Ethics, & Privacy in Tech: A Discussion with Chanda Marlowe
After beginning her career as a high school English teacher, Chanda Marlowe’s career change led her to become FPF’s inaugural Christopher Wolf Diversity Law Fellow. She’s an expert on location and advertising technology, algorithmic fairness, and how vulnerable populations can be uniquely affected by privacy issues. What led you to the Future of Privacy Forum? I […]
Artificial Intelligence: Privacy Promise or Peril?
Understanding AI and its underlying algorithmic processes presents new challenges for privacy officers and others responsible for data governance in companies ranging from retailers to cloud service providers. In the absence of targeted legal or regulatory obligations, AI poses new ethical and practical challenges for companies that strive to maximize consumer benefits while preventing potential harms.
AI and Machine Learning: Perspectives with FPF’s Brenda Leong
As we prepare to toast our 10th anniversary, we’re hearing from FPF policy experts about important privacy issues. Today, Brenda Leong, FPF Senior Counsel and Director of Strategy, is sharing her perspective on AI and machine learning. Brenda also manages the FPF portfolio on biometrics, particularly facial recognition, and oversees strategic planning for the organization. Tell […]