About this Issue

Data technologies allow service providers to mine benign digital activities for information that generates revenue and powers valuable services, prompting concern from some and innovation from others. However, the law privileges some types of information over others, and it may be unethical to collect or use sensitive data without adequate precautions and notifications. Unfortunately, laws defining sensitive data vary widely. The Future of Privacy Forum recognizes the need to clarify these terms, and the Sensitive Data issue page serves as a resource for tracking developments in the characterization, collection, and use of sensitive data.

Spotlight

June 17, 2019 | Amelia Vance

FPF Letter to NY State Legislature

On Friday, June 14, FPF submitted a letter to the New York State Assembly and Senate supporting a well-crafted moratorium on facial recognition systems for security uses in public schools. 

Read More

What's Happening: Sensitive Data

Top Story

December 20, 2018 | Jeremy Greenberg

New FPF Study Documents More Than 150 European Companies Participating in the EU-US Data Transfer Mechanism

EU Companies’ Participation Grew by One Third Over the Past Year

Yesterday, the European Commission published its second annual review of the EU-U.S. Privacy Shield, finding that “the U.S. continues to ensure an adequate level of […]

Read More
Top Story

December 11, 2017 | Lauren Smith

Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making

Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also create valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns can be amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals’ eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments—including the benefits provided by automated decision-making frameworks, and the fallibility of human decision-making.

Read More