TEN QUESTIONS ON AI RISK

Gauging the Liabilities of Artificial Intelligence Within Your Organization

Artificial intelligence and machine learning (AI/ML) generate significant value when used responsibly – and are the subject of growing investment for exactly these reasons. But AI/ML can also amplify organizations’ exposure to potential vulnerabilities, ranging from fairness and security issues to regulatory fines and reputational harm.

Many businesses are incorporating ever more machine-learning-based models into their operations, both on the back end and in consumer-facing contexts. Companies that use these systems without developing them in-house nonetheless assume the responsibility of managing, overseeing, and controlling these learning models, in many cases without extensive internal resources to meet the technical demands involved.

General-purpose toolkits for this challenge are not yet broadly available. To help fill that gap while more technical support is developed, we have created a checklist of questions designed to support sufficient oversight of these systems. The questions in the attached checklist – “Ten Questions on AI Risk” – are meant to serve as an initial guide to gauging these risks, both during the build phase of AI/ML endeavors and beyond.

While there is no one-size-fits-all answer for how to manage and monitor AI systems, these questions should provide a guide for companies using such models, allowing them to customize the questions and frame the answers in contexts specific to their own products, services, and internal operations. We hope to build on this start and offer additional, more detailed resources for such organizations in the future.

The attached document was prepared by bnh.ai, a boutique law firm specializing in AI/ML analytics, in collaboration with the Future of Privacy Forum.