Warning Signs: Identifying Privacy and Security Risks to Machine Learning Systems
Machine learning (ML) is a powerful tool, enabling better health care, safer transportation, and greater efficiency in manufacturing, retail, and online services. That is why FPF is working with Immuta and others to explain the steps machine learning creators can take to limit the risk that data will be compromised or a system manipulated.
Today, FPF released a whitepaper, WARNING SIGNS: The Future of Privacy and Security in an Age of Machine Learning, exploring how machine learning systems can be exposed to new privacy and security risks, and explaining approaches to data protection. Unlike with traditional software, privacy or security harms in machine learning systems do not necessarily require direct access to the underlying data or source code.
The whitepaper presents a layered approach to data protection in machine learning, recommending techniques such as noise injection, inserting intermediaries between training data and the model, making machine learning mechanisms transparent, and applying access controls, monitoring, documentation, testing, and debugging.
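To make the noise injection idea concrete: one common way to implement it is the Laplace mechanism from differential privacy, which adds noise calibrated to a query's sensitivity before releasing a result. The sketch below is illustrative only and is not taken from the whitepaper; the `noisy_count` function and the example dataset are hypothetical.

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Draw one sample from Laplace(0, scale).

    A Laplace variate is the difference of two independent
    Exponential(1) variates, scaled; 1 - rng.random() is strictly
    positive, so the logs are always defined.
    """
    e1 = -math.log(1.0 - rng.random())
    e2 = -math.log(1.0 - rng.random())
    return scale * (e1 - e2)

def noisy_count(records, predicate, epsilon=1.0):
    """Release a counting query with noise calibrated to sensitivity 1.

    Adding or removing one record changes a count by at most 1, so
    Laplace noise with scale 1/epsilon yields epsilon-differential
    privacy for this query.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: a noisy count of ages over 60 in a small dataset.
ages = [34, 61, 72, 45, 68, 59, 80, 22]
print(noisy_count(ages, lambda a: a > 60, epsilon=0.5))
```

A smaller epsilon means more noise and stronger privacy; in practice the same calibration idea extends to other queries by computing their sensitivity.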
My co-authors on the paper are Andrew Burt, Immuta Chief Privacy Officer and Legal Engineer; Sophie Stalla-Bourdillon, Immuta Senior Privacy Counsel and Legal Engineer; and Patrick Hall, H2O.ai Senior Director for Data Science Products.
The whitepaper released today builds on the analysis in Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models, released by FPF and Immuta in June 2018.
Andrew and I will discuss the findings of the WARNING SIGNS whitepaper at the Strata Data Conference in New York City during the “War Stories from the Front Lines of ML” panel at 1:15 p.m. on September 25, 2019, and the “Regulations and the Future of Data” panel at 2:05 p.m. on the same day. If you’ll be at the conference, we hope you’ll join us!