Warning Signs: Identifying Privacy and Security Risks to Machine Learning Systems
FPF is working with Immuta and others to explain the steps machine learning creators can take to limit the risk that data could be compromised or a system manipulated.
Understanding Artificial Intelligence and Machine Learning
The opening session of FPF’s Digital Data Flows Masterclass provided an educational overview of Artificial Intelligence and Machine Learning – featuring Dr. Swati Gupta, Assistant Professor in the H. Milton […]
Artificial Intelligence: Privacy Promise or Peril?
Understanding AI and its underlying algorithmic processes presents new challenges for privacy officers and others responsible for data governance in companies ranging from retailers to cloud service providers. In the absence of targeted legal or regulatory obligations, AI poses new ethical and practical challenges for companies that strive to maximize consumer benefits while preventing potential harms.
FPF Partner in algoaware Project Releases State of the Art Report
algoaware has released the first public version of its State of the Art Report, open for peer review. The report includes a comprehensive explanation of the key concepts of algorithmic decision-making, a summary of the academic debate and its most pressing issues, and an overview of the most recent and relevant initiatives and policy actions by civil society and by national and international governing bodies.
Calls for Regulation of Facial Recognition Technology
We look forward to working with Microsoft, others in industry, and policymakers to “create policies, processes, and tools” to make responsible use of facial recognition technology a reality.
FPF Release: The Privacy Expert’s Guide to AI and Machine Learning
Today, FPF announces the release of The Privacy Expert’s Guide to AI and Machine Learning. The guide explains the technological basics of AI and ML systems at a level accessible to non-programmers and addresses privacy challenges associated with implementing new and existing ML-based products and services.
Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making
Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also create valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns can be amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals’ eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments—including the benefits provided by automated decision-making frameworks, and the fallibility of human decision-making.