About this Issue

The proliferation of data collection has encouraged innovative uses of data in all corners of society. The analysis of individuals' personal data, once conducted chiefly by researchers at academic institutions, has been embraced by a rapidly growing number of companies and not-for-profit organizations. High-profile cases of experimentation on and manipulation of user data, such as those involving Facebook and OkCupid, threaten to lead organizations to keep research results confidential in order to avoid public scrutiny or potential legal liability. FPF calls for new regulatory frameworks and standards to protect the valuable research these organizations produce and to ensure the ethical collection and use of data for research purposes.

Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making
Spotlight

December 11, 2017 | Lauren Smith

Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also create valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns can be amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals' eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments, including both the benefits provided by automated decision-making frameworks and the fallibility of human decision-making.

What's Happening: Ethics

Taming The Golem: Challenges of Ethical Algorithmic Decision-Making
Top Story

March 2, 2018 | Melanie E. Bates

This article examines the potential for bias and discrimination in automated algorithmic decision-making. As a group of commentators recently asserted, “[t]he accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology.” Yet this article rejects an approach that depicts every algorithmic process as a “black box” that is inevitably plagued by bias and potential injustice.

New Study: Companies are Increasingly Making Data Accessible to Academic Researchers, but Opportunities Exist for Greater Collaboration
Top Story

November 14, 2017 | Melanie E. Bates

Washington, DC – Today, the Future of Privacy Forum released a new study, Understanding Corporate Data Sharing Decisions: Practices, Challenges, and Opportunities for Sharing Corporate Data with Researchers. The report presents FPF's findings from research and interviews with experts in the academic and industry communities, and discusses three main areas: 1) the extent to which leading companies make data available to support published research that contributes to public knowledge; 2) why and how companies share data for academic research; and 3) the risks companies perceive in such sharing, as well as their strategies for mitigating those risks.
