The FPF Center for Artificial Intelligence: Navigating AI Policy, Regulation, and Governance
The rapid deployment of Artificial Intelligence for consumer, enterprise, and government uses has created challenges for policymakers, compliance experts, and regulators. AI policy stakeholders are seeking sophisticated, practical policy information and analysis.
This is where the FPF Center for Artificial Intelligence comes in, expanding FPF’s role as the leading pragmatic and trusted voice for those who seek impartial, practical analysis of the latest challenges in AI-related regulation, compliance, and ethical use.
At the FPF Center for Artificial Intelligence, we help policymakers, privacy experts at organizations, civil society, and academics navigate AI policy and governance.
FPF has a long history of AI-related and emerging technology policy work focused on data, privacy, and the responsible use of technology to mitigate harms. From FPF’s presentation to global privacy regulators about emerging AI technologies and risks in 2017 to our briefing for US Congressional members detailing the risks and mitigation strategies for AI-powered workplace tech in 2023, FPF has helped policymakers around the world better understand AI risks and opportunities while equipping data, privacy, and AI experts with the information they need to develop and deploy AI responsibly in their organizations.
In 2024, FPF received a grant from the National Science Foundation (NSF) to advance the White House Executive Order on Artificial Intelligence by supporting the use of Privacy Enhancing Technologies (PETs) by government agencies and the private sector through greater legal certainty, standardization, and equitable use. FPF is also a member of the U.S. AI Safety Institute at the National Institute of Standards and Technology (NIST), where it focuses on assessing the policy implications of the changing nature of artificial intelligence.
Areas of work within the FPF Center for Artificial Intelligence include:
- Legislative Comparison
- Responsible AI Governance
- AI Policy by Sector
- AI Assessments & Analyses
- Novel AI Policy Issues
- AI and Privacy Enhancing Technologies
FPF Center for AI Leadership Council
The FPF Center for Artificial Intelligence will be supported by a Leadership Council of leading experts from around the globe. The Council will consist of members from industry, academia, civil society, and current and former policymakers.
We are delighted to announce the founding Leadership Council members:
- Estela Aranha, Member of the United Nations High-level Advisory Body on AI; Former State Secretary for Digital Rights, Ministry of Justice and Public Security, Federal Government of Brazil
- Jocelyn Aqua, Principal, Data, Risk, Privacy and AI Governance, PricewaterhouseCoopers LLP
- John Bailey, Nonresident Senior Fellow, American Enterprise Institute
- Lori Baker, Vice President, Data Protection & Regulatory Compliance, Dubai International Financial Centre Authority (DPA)
- Cari Benn, Assistant Chief Privacy Officer, Microsoft Corporation
- Andrew Bloom, Vice President & Chief Privacy Officer, McGraw Hill
- Kate Charlet, Head of Global Privacy, Safety, and Security Policy, Google
- Prof. Simon Chesterman, David Marshall Professor of Law & Vice Provost, National University of Singapore; Principal Researcher, Office of the UNSG’s Envoy on Technology, High-Level Advisory Body on AI
- Barbara Cosgrove, Vice President, Chief Privacy Officer, Workday
- Elizabeth Denham, Chief Policy Strategist, Information Accountability Foundation, Former UK ICO Commissioner and British Columbia Privacy Commissioner
- Lydia F. de la Torre, Senior Lecturer at University of California, Davis; Founder, Golden Data Law, PBC; Former California Privacy Protection Agency Board Member
- Leigh Feldman, SVP, Chief Privacy Officer, Visa Inc.
- Lindsey Finch, Executive Vice President, Global Privacy & Product Legal, Salesforce
- Harvey Jang, Vice President, Chief Privacy Officer, Cisco Systems, Inc.
- Emerald de Leeuw-Goggin, Global Head of AI Governance & Privacy, Logitech
- Ewa Luger, Professor of Human-Data Interaction, University of Edinburgh; Co-Director, Bridging Responsible AI Divides (BRAID)
- Dr. Gianclaudio Malgieri, Associate Professor of Law & Technology at eLaw, University of Leiden
- State Senator James Maroney, Connecticut
- Christina Montgomery, Chief Privacy & Trust Officer, AI Ethics Board Chair, IBM
- Carolyn Pfeiffer, Senior Director, Privacy, AI & Ethics and DSSPE Operations, Johnson & Johnson Innovative Medicine
- Ben Rossen, Associate General Counsel, AI Policy & Regulation, OpenAI
- Crystal Rugege, Managing Director, Centre for the Fourth Industrial Revolution Rwanda
- Guido Scorza, Member, The Italian Data Protection Authority
- Nubiaa Shabaka, Global Chief Privacy Officer and Chief Cyber Legal Officer, Adobe, Inc.
- Rob Sherman, Vice President and Deputy Chief Privacy Officer for Policy, Meta
- Dr. Anna Zeiter, Vice President & Chief Privacy Officer, Privacy, Data & AI Responsibility, eBay
- Yeong Zee Kin, Chief Executive of Singapore Academy of Law and former Assistant Chief Executive (Data Innovation and Protection Group), Infocomm Media Development Authority of Singapore
For more information on the FPF Center for AI, email [email protected].
Featured
Examining the Open Data Movement
The transparency goals of the open data movement serve important social, economic, and democratic functions in cities like Seattle. At the same time, some municipal datasets about the city and its citizens’ activities carry inherent risks to individual privacy when shared publicly. In 2016, the City of Seattle declared in its Open Data Policy that the city’s data would be “open by preference,” except when doing so may affect individual privacy.[1] To ensure its Open Data Program effectively protects individuals, Seattle committed to performing an annual risk assessment and tasked the Future of Privacy Forum (FPF) with creating and deploying an initial privacy risk assessment methodology for open data.
Public comments on proposed Open Data Risk Assessment for the City of Seattle
FPF requested feedback from the public on its proposed Draft Open Data Risk Assessment for the City of Seattle. In 2016, the City of Seattle declared in its Open Data Policy that the city’s data would be “open by preference,” except when doing so may affect individual privacy. To ensure its Open Data program effectively protects individuals, Seattle committed to performing an annual risk assessment and tasked FPF with creating and deploying an initial privacy risk assessment methodology for open data.
Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making
Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also create valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns can be amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals’ eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments—including the benefits provided by automated decision-making frameworks, and the fallibility of human decision-making.
Understanding Corporate Data Sharing Decisions: Practices, Challenges, and Opportunities for Sharing Corporate Data with Researchers
Today, the Future of Privacy Forum released a new study, Understanding Corporate Data Sharing Decisions: Practices, Challenges, and Opportunities for Sharing Corporate Data with Researchers. In this report, we aim to contribute to the literature by seeking the “ground truth” from the corporate sector about the challenges they encounter when they consider making data available for academic research. We hope that the impressions and insights gained from this first look at the issue will help formulate further research questions, inform the dialogue between key stakeholders, and identify constructive next steps and areas for further action and investment.
New Study: Companies are Increasingly Making Data Accessible to Academic Researchers, but Opportunities Exist for Greater Collaboration
Washington, DC – Today, the Future of Privacy Forum released a new study, Understanding Corporate Data Sharing Decisions: Practices, Challenges, and Opportunities for Sharing Corporate Data with Researchers. In this report, FPF reveals findings from research and interviews with experts in the academic and industry communities. Three main areas are discussed: 1) The extent to which leading companies make data available to support published research that contributes to public knowledge; 2) Why and how companies share data for academic research; and 3) The risks companies perceive to be associated with such sharing, as well as their strategies for mitigating those risks.
FPF Comments on the FTC Informational Injury Workshop
On Friday, October 27, 2017, the Future of Privacy Forum filed comments with the Federal Trade Commission in advance of the December 12, 2017 Informational Injury Workshop. The purpose of the workshop is to examine consumer injury in the context of privacy and data security. FPF’s comments focus on describing the harms that can arise from automated decision-making as well as highlighting existing risk-based privacy analyses.
Artificial Intelligence, Machine Learning, and Ethical Applications
On September 25, 2017, the Future of Privacy Forum and the Information Accountability Foundation will co-host an official side event at the International Conference of Data Protection Commissioners. The event follows IAF’s publication of Artificial Intelligence, Ethics and Enhanced Data Stewardship, and FPF’s curation of leading research highlighting the privacy challenges posed by artificial intelligence.
FPF Joins Leading Civil Society Groups, Academics and Companies to Participate in the Work of the Partnership on AI
Today, the Partnership on AI announced a new group of key stakeholders who will work with the Partnership’s Board of Directors to define and advance a shared vision of artificial intelligence that benefits people and society. The Future of Privacy Forum is proud to join this organization and help drive this important work forward.
AI Ethics: The Privacy Challenge
The Future of Privacy Forum and the Brussels Privacy Hub of the Vrije Universiteit Brussel (VUB) are partnering with IEEE Security & Privacy in a call for papers focused on AI Ethics: The Privacy Challenge. Selected papers will be featured at The Brussels Privacy Symposium, an academic program jointly presented by the Brussels Privacy Hub of the VUB and FPF’s National Science Foundation supported Research Coordination Network.
Facial Recognition and Privacy
Facial Recognition is an exciting technology that promises a host of consumer benefits but also raises a range of privacy concerns. In order to help advance policy discussions around different uses of “computer vision,” we are releasing today a Facial Recognition Discussion Document. We hope the background review of current legal and policymaker guidance is […]