The FPF Center for Artificial Intelligence: Navigating AI Policy, Regulation, and Governance
The rapid deployment of Artificial Intelligence for consumer, enterprise, and government uses has created challenges for policymakers, compliance experts, and regulators. AI policy stakeholders are seeking sophisticated, practical policy information and analysis.
This is where the FPF Center for Artificial Intelligence comes in, expanding FPF’s role as the leading pragmatic and trusted voice for those who seek impartial, practical analysis of the latest challenges in AI-related regulation, compliance, and ethical use.
At the FPF Center for Artificial Intelligence, we help policymakers, privacy experts at organizations, civil society, and academics navigate AI policy and governance. The Center is supported by a Leadership Council of experts from around the globe. The Council consists of members from industry, academia, and civil society, as well as current and former policymakers.
FPF has a long history of AI-related and emerging technology policy work focused on data, privacy, and the responsible use of technology to mitigate harms. From FPF’s presentation to global privacy regulators about emerging AI technologies and risks in 2017 to our briefing for US Congressional members detailing the risks and mitigation strategies for AI-powered workplace tech in 2023, FPF has helped policymakers around the world better understand AI risks and opportunities while equipping data, privacy, and AI experts with the information they need to develop and deploy AI responsibly in their organizations.
In 2024, FPF received a grant from the National Science Foundation (NSF) to advance the White House Executive Order on Artificial Intelligence by supporting the use of Privacy Enhancing Technologies (PETs) by government agencies and the private sector through greater legal certainty, standardization, and equitable use. FPF is also a member of the U.S. AI Safety Institute at the National Institute of Standards and Technology (NIST), where it focuses on assessing the policy implications of the changing nature of artificial intelligence.
Areas of work within the FPF Center for Artificial Intelligence include:
- Legislative Comparison
- Responsible AI Governance
- AI Policy by Sector
- AI Assessments & Analyses
- Novel AI Policy Issues
- AI and Privacy Enhancing Technologies
FPF Center for AI Leadership Council
The FPF Center for Artificial Intelligence will be supported by a Leadership Council of leading experts from around the globe. The Council will consist of members from industry, academia, and civil society, as well as current and former policymakers.
We are delighted to announce the founding Leadership Council members:
- Estela Aranha, Member of the United Nations High-level Advisory Body on AI; Former State Secretary for Digital Rights, Ministry of Justice and Public Security, Federal Government of Brazil
- Jocelyn Aqua, Principal, Data, Risk, Privacy and AI Governance, PricewaterhouseCoopers LLP
- John Bailey, Nonresident Senior Fellow, American Enterprise Institute
- Lori Baker, Vice President, Data Protection & Regulatory Compliance, Dubai International Financial Centre Authority (DPA)
- Cari Benn, Assistant Chief Privacy Officer, Microsoft Corporation
- Andrew Bloom, Vice President & Chief Privacy Officer, McGraw Hill
- Kate Charlet, Head of Global Privacy, Safety, and Security Policy, Google
- Prof. Simon Chesterman, David Marshall Professor of Law & Vice Provost, National University of Singapore; Principal Researcher, Office of the UNSG’s Envoy on Technology, High-Level Advisory Body on AI
- Barbara Cosgrove, Vice President, Chief Privacy Officer, Workday
- Elizabeth Denham, Chief Policy Strategist, Information Accountability Foundation, Former UK ICO Commissioner and British Columbia Privacy Commissioner
- Lydia F. de la Torre, Senior Lecturer at University of California, Davis; Founder, Golden Data Law, PBC; Former California Privacy Protection Agency Board Member
- Leigh Feldman, SVP, Chief Privacy Officer, Visa Inc.
- Lindsey Finch, Executive Vice President, Global Privacy & Product Legal, Salesforce
- Harvey Jang, Vice President, Chief Privacy Officer, Cisco Systems, Inc.
- Emerald de Leeuw-Goggin, Global Head of AI Governance & Privacy, Logitech
- Ewa Luger, Professor of Human-Data Interaction, University of Edinburgh; Co-Director, Bridging Responsible AI Divides (BRAID)
- Dr. Gianclaudio Malgieri, Associate Professor of Law & Technology at eLaw, University of Leiden
- State Senator James Maroney, Connecticut
- Christina Montgomery, Chief Privacy & Trust Officer, AI Ethics Board Chair, IBM
- Carolyn Pfeiffer, Senior Director, Privacy, AI & Ethics and DSSPE Operations, Johnson & Johnson Innovative Medicine
- Ben Rossen, Associate General Counsel, AI Policy & Regulation, OpenAI
- Crystal Rugege, Managing Director, Centre for the Fourth Industrial Revolution Rwanda
- Guido Scorza, Member, The Italian Data Protection Authority
- Nubiaa Shabaka, Global Chief Privacy Officer and Chief Cyber Legal Officer, Adobe, Inc.
- Rob Sherman, Vice President and Deputy Chief Privacy Officer for Policy, Meta
- Dr. Anna Zeiter, Vice President & Chief Privacy Officer, Privacy, Data & AI Responsibility, eBay
- Yeong Zee Kin, Chief Executive of Singapore Academy of Law and former Assistant Chief Executive (Data Innovation and Protection Group), Infocomm Media Development Authority of Singapore
For more information on the FPF Center for AI, email [email protected].
Featured
FPF Submits Comments to the FEC on the Use of Artificial Intelligence in Campaign Ads
On October 16, 2023, the Future of Privacy Forum submitted comments to the Federal Election Commission (FEC) on the use of artificial intelligence in campaign ads. The FEC is seeking comments in response to a petition that asked the Agency to initiate a rulemaking to clarify that its regulation on “fraudulent misrepresentation” applies to deliberately […]
Future of Privacy Forum and Leading Companies Release Best Practices for AI in Employment Relationships
Expert Working Group Focused on AI in Employment Launches Best Practices that Promote Non-Discrimination, Human Oversight, Transparency, and Additional Protections. Today, the Future of Privacy Forum (FPF), with ADP, Indeed, LinkedIn, and Workday — leading hiring and employment software developers — released Best Practices for AI and Workplace Assessment Technologies. The Best Practices guide makes […]
How Data Protection Authorities are De Facto Regulating Generative AI
The Istanbul Bar Association IT Law Commission published Dr. Gabriela Zanfir-Fortuna’s article, “How Data Protection Authorities are De Facto Regulating Generative AI,” in their August monthly AI Working Group Bulletin, “Law in the Age of Artificial Intelligence” (Yapay Zekâ Çağinda Hukuk). Generative AI took the world by storm in the past year, with services like […]
FPF Releases Generative AI Internal Policy Checklist To Guide Development of Policies to Promote Responsible Employee Use of Generative AI Tools
Today, the Future of Privacy Forum (FPF) releases the Generative AI for Organizational Use: Internal Policy Checklist. With the proliferation of employee use of generative AI tools, this checklist provides organizations with a powerful tool to help revise their internal policies and procedures to ensure that employees are using generative AI in a way that […]
Insights into Brazil’s AI Bill and its Interaction with Data Protection Law: Key Takeaways from the ANPD’s Webinar
Authors: Júlia Mendonça and Mariana Rielli The following is a guest post to the FPF blog by Júlia Mendonça, Researcher at Data Privacy Brasil, and Mariana Rielli, Institutional Development Coordinator at Data Privacy Brasil. The guest blog reflects the opinion of the authors only. Guest blog posts do not necessarily reflect the views of FPF. […]
Newly Updated Report: The Spectrum of Artificial Intelligence – Companion to the FPF AI Infographic
Today, we are re-releasing the report: The Spectrum of Artificial Intelligence – Companion to the FPF AI Infographic with new updates to account for the development and use of advanced generative AI tools. In December 2020, FPF published the Spectrum of Artificial Intelligence – An Infographic Tool, designed to visually display the variety and complexity […]
Unveiling China’s Generative AI Regulation
Authors: Yirong Sun and Jingxian Zeng The following is a guest post to the FPF blog by Yirong Sun, research fellow at the New York University School of Law Guarini Institute for Global Legal Studies at NYU School of Law: Global Law & Tech and Jingxian Zeng, research fellow at the University of Hong Kong […]
AI Verify: Singapore’s AI Governance Testing Initiative Explained
In recent months, global interest in AI governance and regulation has expanded dramatically. Many identify a need for new governance and regulatory structures in response to the impressive capabilities of generative AI systems, such as OpenAI’s ChatGPT and DALL-E, Google’s Bard, Stable Diffusion, and more. While much of this attention focuses on the upcoming EU […]
Let’s Look at LLMs: Understanding Data Flows and Risks in the Workplace
Over the last few months, we have seen generative AI systems and Large Language Models (LLMs), like OpenAI’s ChatGPT, Google Bard, Stable Diffusion, and Dall-E, send shockwaves throughout society. Companies are racing to bake AI features into existing products and roll out new services. Many Americans are worrying whether generative AI and LLMs are going […]
Knowledge is Power: The Future of Privacy Forum launches FPF Training Program
“An investment in knowledge always pays the best interest”–Ben Franklin Let’s make 2023 the year we invest in ourselves, our teams, and the knowledge needed to best navigate this dynamic world of privacy and data protection. I am fortunate to know many of you who will read this blog post, but for those who I […]