About this Issue

Data technologies allow service providers to mine benign digital activities for information that generates revenue and powers valuable services, prompting concern from some and innovation from others. However, the law privileges some types of information over others, and it may be unethical to collect or use sensitive data without adequate precautions and notifications. Unfortunately, legal definitions of sensitive data vary widely. The Future of Privacy Forum recognizes the need to clarify these terms, and this Sensitive Data issue page serves as a resource for developments in the characterization, collection, and use of sensitive data.

Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making
Spotlight

December 11, 2017 | Lauren Smith

Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also raise valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns can be amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals' eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments, including both the benefits provided by automated decision-making frameworks and the fallibility of human decision-making.

What's Happening: Sensitive Data

Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making
Top Story

December 11, 2017 | Lauren Smith

June 22nd Webinar: PII, Cookies, and De-Identification – Accounting for Shades of Grey
June 14th Event: A Roundtable on Ethics, Privacy, and Research Reviews