Warning Signs: The Future of Privacy and Security in an Age of Machine Learning Report
FPF released a white paper, WARNING SIGNS: The Future of Privacy and Security in an Age of Machine Learning, exploring how machine learning systems can be exposed to new privacy and security risks and explaining approaches to data protection. Unlike traditional software, machine learning systems can suffer privacy or security harms without an adversary ever gaining direct access to the underlying data or source code. The white paper presents a layered approach to data protection in machine learning, recommending techniques such as injecting noise, inserting intermediaries between the training data and the model, making machine learning mechanisms transparent, and applying access controls, monitoring, documentation, testing, and debugging.
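To make the noise-injection idea concrete, here is a minimal sketch of one common variant: adding Laplace noise to a counting query, as in differential privacy. The function names and the epsilon value are illustrative assumptions, not taken from the white paper.

```python
import math
import random

def laplace_noise(scale, rng):
    # Sample from Laplace(0, scale) via inverse-CDF sampling.
    u = rng.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def noisy_count(true_count, epsilon, rng):
    # A counting query has sensitivity 1, so the noise scale is 1/epsilon.
    # Smaller epsilon means more noise and stronger privacy protection.
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Illustrative use: report a count of 100 with epsilon = 0.5.
rng = random.Random(0)
released = noisy_count(100, 0.5, rng)
```

Each release perturbs the true value slightly, so individual records cannot be confidently inferred from the output, while aggregate statistics remain useful.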
Read the FPF Blog to learn more.