Artificial intelligence and machine learning (AI/ML) generate significant value when used responsibly – and are the subject of growing investment for exactly these reasons. But AI/ML can also amplify organizations’ exposure to potential vulnerabilities, ranging from fairness and security issues to regulatory fines and reputational harm.
By Brenda Leong and Dr. Sara Jordan

Machine learning-based technologies are playing a substantial role in the response to the COVID-19 pandemic. Experts are using machine learning to study the virus, test potential treatments, diagnose individuals, analyze the public health impacts, and more. Below, we describe some of the leading efforts and identify data protection […]
Immuta and the Future of Privacy Forum (FPF) today released a working white paper, Data Protection by Process: How to Operationalise Data Protection by Design for Machine Learning, that provides guidance on embedding data protection principles within the life cycle of a machine learning model. Data Protection by Design (DPbD) is a core data protection requirement […]
FPF and Immuta Examine Approaches That Can Limit Informational or Behavioral Harms WASHINGTON, D.C. – September 20, 2019 – The Future of Privacy Forum (FPF) released a white paper, WARNING SIGNS: The Future of Privacy and Security in an Age of Machine Learning, exploring how machine learning systems can be exposed to new privacy and […]
FPF is working with Immuta and others to explain the steps machine learning creators can take to limit the risk that data could be compromised or a system manipulated.
The media has recently labeled manipulated videos of people “deepfakes,” a portmanteau of “deep learning” and “fake,” on the assumption that AI-based software is behind them all. But the technology behind video manipulation is not all based on deep learning (or any form of AI), and the videos lumped together as deepfakes actually differ depending on the particular technology used. So while the example videos above were all doctored in some way, they were not all altered using the same technological tools, and the risks they pose – particularly whether they can be identified as fake – may vary.
The opening session of FPF’s Digital Data Flows Masterclass provided an educational overview of Artificial Intelligence and Machine Learning – featuring Dr. Swati Gupta, Assistant Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech; and Dr. Oliver Grau, Chair of ACM’s Europe Technology Policy Committee, Intel Automated Driving Group, […]
Understanding AI and its underlying algorithmic processes presents new challenges for privacy officers and others responsible for data governance in companies ranging from retailers to cloud service providers. In the absence of targeted legal or regulatory obligations, AI poses new ethical and practical challenges for companies that strive to maximize consumer benefits while preventing potential harms.
algoaware has released the first public version of the State of the Art Report, open for peer review. The report includes a comprehensive explanation of the key concepts of algorithmic decision-making, a summary of the academic debate and its most pressing issues, and an overview of the most recent and relevant initiatives and policy actions by civil society and by national and international governing bodies.
Today, FPF announces the release of The Privacy Expert’s Guide to AI and Machine Learning. This guide explains the technological basics of AI and ML systems at a level of understanding useful for non-programmers, and addresses certain privacy challenges associated with the implementation of new and existing ML-based products and services.