Future of Privacy Forum Publishes Report Exploring Organizations’ Emerging Practices and Challenges Assessing AI Risks
As AI models and systems become more widespread and powerful, FPF’s report finds many organizations are taking a four-step approach to managing potential risks
With growing focus from policymakers and regulators on the impact of artificial intelligence (AI) systems, organizations striving to use AI responsibly are increasingly embracing AI impact assessments to identify risks and take steps to minimize them. In response to the growing use of—and uncertainty around—AI impact assessments, the Future of Privacy Forum (FPF) Center for Artificial Intelligence published a new report, “AI Governance Behind the Scenes: Emerging Practices For AI Impact Assessments,” to examine the considerations, emerging practices, and challenges that companies are experiencing as they endeavor to harness AI’s potential while mitigating its harms.
“Companies are embedding AI into their systems for a variety of uses, from research to enterprise and entertainment, though questions remain around how to implement AI models in a responsible, ethical manner,” said Daniel Berrick, FPF’s Counsel for Artificial Intelligence and the report’s author. “This report underscores that much more work needs to be done to ensure that companies can operationalize AI impact assessments, identify risks, and implement robust risk management practices. We hope this report, built from conversations with a range of stakeholders, can serve as a resource for those evaluating how to deploy emerging technologies responsibly.”
Though recent years have witnessed a growing number of laws and resources on AI governance, many organizations remain uncertain about what AI impact assessments entail or which framework to use. In light of this uncertainty, FPF surveyed over 60 private sector stakeholders to gain insight into the common approaches companies are employing and the challenges they face when conducting AI impact assessments. FPF found that companies are converging on several practices for conducting AI impact assessments, such as accounting for both intended and unintended uses of AI models and systems. However, practitioners continue to face challenges at different points in the assessment process.
FPF found:
- Many organizations are struggling to obtain the full extent of relevant information from model developers and system providers;
- Organizations have different levels of sophistication in their abilities to assess the levels of AI risks across varied contexts;
- There is a lack of clarity regarding how best to measure risk management strategies’ effectiveness; and
- Novel uses of AI can create uncertainty about when risk has been brought within acceptable levels.
Other insights include:

- When gathering information about models and systems, organizations typically seek a variety of details, such as an AI model’s training, use cases, capabilities, and more;
- A growing number of organizations have sought to integrate AI impact assessments into existing enterprise risk management processes, including those around privacy; and
- When identifying and testing for AI-related risks, organizations may use both qualitative and quantitative approaches.
Organizations seeking to enhance their AI Impact Assessments should consider:
- Bolstering processes for gathering information from third-party model developers and system vendors;
- Improving internal education about AI risks; and
- Enhancing techniques that measure risk management strategies’ effectiveness.
“FPF’s Center for Artificial Intelligence was created to act as a collaborative force for shared knowledge between stakeholders and support the responsible development of AI. The Center’s report addresses key knowledge gaps and promotes collaboration,” said John Verdi, Senior Vice President for Policy at FPF. “FPF’s report was created with input from dozens of expert stakeholders, and it is the culmination of six months of convenings, interviews and workshops aimed at describing the state of play.”
The report dives deeper into the trends and challenges companies encounter at each step of conducting AI impact assessments, as well as the circumstances that trigger them. To learn more, read the new report here.
###
About Future of Privacy Forum (FPF)
The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections.
FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington, D.C., Brussels, Singapore, and Tel Aviv. Learn more at fpf.org.
Reach out to [email protected] with any questions.