Automated Decision-Making: Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also raise valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns are amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals’ eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments—including the benefits provided by automated decision-making frameworks and the fallibility of human decision-making.
FPF attempted to identify, articulate, and categorize the types of harm that may result from automated decision-making in Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making.
De-Identification: Legal rules for data should be calibrated to multiple gradations of identifiability, and administrative controls should be credited as part of a responsible approach to de-identification. FPF developed a practical framework for applying privacy protections based on the nature of the data collected, the risks of re-identification, and the legal and administrative protections that may be applied. FPF is continuing to develop models that improve transparency and terminology around de-identification and that advance practical de-identification measures.
• FPF’s framework, described in Shades of Gray: Seeing the Full Spectrum of Practical Data De-Identification, was published in the Santa Clara Law Review;
• FPF created a Visual Guide to Practical Data De-Identification;
• FPF held a workshop, Practical De-Identification, to discuss what it means for data to be appropriately de-identified;
• FPF held a forum, De-Identification: Practice and Policy, to discuss common uses of de-identification, implementation and best practices, and case studies; and
• FPF published Student Data and De-Identification: Understanding De-Identification of Education Records and Related Requirements of FERPA.
Ethics: FPF has called for new frameworks and standards to promote the ethical use of data for scientific research. Sponsored by the National Science Foundation and the Alfred P. Sloan Foundation, FPF held a day-long workshop to advance discussions of ethical review mechanisms for data collected in corporate, non-profit, and other non-academic settings. Workshop papers were published in Beyond IRBs: Ethical Review Processes for Big Data Research, an edition of the Washington & Lee School of Law’s online law review. FPF works with companies, civil society, and other thought leaders to identify ethical challenges posed by algorithmic decision-making and artificial intelligence, as well as potential solutions to promote fairness and mitigate the risk of algorithmic discrimination.
Brussels Privacy Symposium: FPF and the Vrije Universiteit Brussel established a joint program to develop and promote research, scholarship, and best practices that support beneficial uses of data while respecting individuals’ fundamental rights. The annual Brussels Privacy Symposium draws on the expertise of leading EU and US academics, industry practitioners, and policymakers to highlight innovative research on emerging privacy issues. The Symposium launched in 2016 with an academic workshop titled Identifiability: Policy and Practical Solutions for Anonymization and Pseudonymization; the 2017 symposium will focus on the privacy implications of artificial intelligence.
Legislative Developments: Many of the significant uses of data that raise concerns are already addressed, at least in part, by existing legislation. As supporters of the benefits of responsible data use, we have assembled the following list of existing federal laws that prohibit discrimination in a variety of contexts.
FPF List of Federal Anti-Discrimination Laws