
FPF at CPDP 2022: Panels and Side Events
As the annual Computers, Privacy and Data Protection (CPDP) conference took place in Brussels from May 23 to 25, several Future of Privacy Forum (FPF) staff members took part in panels and events organized by FPF and other organizations before and during the conference. In this blog post, we provide an overview of such events, with […]

Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making
Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also raise valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns can be amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals' eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments, including the benefits provided by automated decision-making frameworks and the fallibility of human decision-making.