About this Issue

FPF has pursued a combination of practical strategies and high-level thought leadership to address new opportunities and privacy risks presented by novel uses of personal information. FPF has centered its big data work on de-identification and data research ethics. FPF is also pursuing new work related to the benefits and risks of algorithmic decision-making and artificial intelligence.


Automated Decision-Making: Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also create valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns can be amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals’ eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments—including the benefits provided by automated decision-making frameworks, and the fallibility of human decision-making.

Highlights Include:

• In Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making, FPF identifies, articulates, and categorizes the types of harm that may result from automated decision-making.

De-Identification: Legal rules for data should be calibrated to multiple gradations of identifiability, and administrative controls should be credited as part of a responsible approach to de-identification. FPF developed a practical framework for applying privacy protections based on the nature of the data collected, the risks of re-identification, and the legal and administrative protections that may be applied. FPF continues to develop models that improve transparency and terminology around de-identification and that advance practical de-identification measures.

Highlights Include:

• FPF’s framework, described in Shades of Gray: Seeing the Full Spectrum of Practical Data De-Identification, was published in the Santa Clara Law Review;
• FPF created a Visual Guide to Practical Data De-Identification;
• FPF held a workshop, Practical De-Identification, to discuss what it means for data to be appropriately de-identified;
• FPF held a forum, De-Identification: Practice and Policy, to discuss common uses of de-identification, implementation and best practices, and case studies; and
• FPF published Student Data and De-Identification: Understanding De-Identification of Education Records and Related Requirements of FERPA.

Ethics: FPF has called for new frameworks and standards to promote the ethical use of data for scientific research. Sponsored by the National Science Foundation and the Alfred P. Sloan Foundation, FPF held a day-long workshop to advance discussions of ethical review mechanisms for data collected in corporate, non-profit, and other non-academic settings. Workshop papers were published in Beyond IRBs: Ethical Review Processes for Big Data Research, an edition of the Washington & Lee School of Law’s online law review. FPF works with companies, civil society, and other thought leaders to identify ethical challenges posed by algorithmic decision-making and artificial intelligence, as well as potential solutions to promote fairness and mitigate the risk of algorithmic discrimination.

Brussels Privacy Symposium: FPF and the Vrije Universiteit Brussel established a joint program to develop and promote research, scholarship, and best practices to support beneficial uses of data while respecting individuals’ fundamental rights. The annual Brussels Privacy Symposium draws on the expertise of leading EU and US academics, industry practitioners, and policy makers to produce an annual workshop highlighting innovative research on emerging privacy issues. The Symposium launched in 2016 with an academic workshop titled Identifiability: Policy and Practical Solutions for Anonymization and Pseudonymization; the 2017 symposium will focus on the privacy implications of artificial intelligence.

Legislative Developments: Many of the significant uses of data that raise concerns are addressed at least in part by existing legislation. As supporters of responsible data use, we have assembled the following list of existing federal laws that prohibit discrimination in a variety of contexts.

FPF List of Federal Anti-Discrimination Laws


Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making

December 11, 2017 | Lauren Smith


What's Happening: Big Data

FPF Publishes Report Supporting Stakeholder Engagement and Communications for Researchers and Practitioners Working to Advance Administrative Data Research

July 16, 2018 | Kelsey Finch


The ADRF Network is an evolving grassroots effort among researchers and organizations seeking to collaborate on improving access to administrative data and promoting its ethical use in social science research. As supporters of evidence-based policymaking and research, FPF has been an integral part of the Network since its launch and has chaired the Network’s Data Privacy and Security Working Group since November 2017.

Taming The Golem: Challenges of Ethical Algorithmic Decision-Making

March 2, 2018 | Melanie E. Bates


This article examines the potential for bias and discrimination in automated algorithmic decision-making. As a group of commentators recently asserted, “[t]he accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology.” Yet this article rejects an approach that depicts every algorithmic process as a “black box” that is inevitably plagued by bias and potential injustice.

New Future of Privacy Forum Study Finds the City of Seattle’s Open Data Program a National Leader in Privacy Program Management

January 25, 2018 | Melanie E. Bates


Today, the Future of Privacy Forum released its City of Seattle Open Data Risk Assessment. The Assessment provides tools and guidance to the City of Seattle and other municipalities as they navigate the complex policy, operational, technical, organizational, and ethical standards that support privacy-protective open data programs.
