Data Sharing for Research Compendium Report and Case Studies
Data Sharing for Research: A Compendium of Case Studies, Analysis, and Recommendations is a report on corporate-academic partnerships that provides practical recommendations for companies and researchers who want to share data for research. The report, along with its case studies, demonstrates how, for many organizations, data-sharing partnerships are transitioning from being considered an experimental business activity […]
Optum & The Mayo Clinic Win the 2022 Award for Research Data Stewardship
On Wednesday, May 10, 2023, the Future of Privacy Forum (FPF) honored representatives from Optum and the Mayo Clinic for their outstanding corporate-academic research data-sharing partnership at the 3rd annual Awards for Research Data Stewardship. The awards honor companies and researchers that prioritize privacy-oriented and ethical data sharing for research. In a keynote address, United […]
Race Equity, Surveillance in the Post-Roe Era, and Data Protection Frameworks in the Global South are Major Topics During This Year’s Privacy Papers for Policymakers Event
The Future of Privacy Forum (FPF) hosted a Capitol Hill event honoring 2022’s must-read privacy scholarship at the 13th annual Privacy Papers for Policymakers Awards ceremony. This year’s event featured an opening keynote by FTC Commissioner Alvaro Bedoya as well as facilitated discussions with the winning authors: Anita Allen, Anupam Chander, Eunice Park, Pawel Popiel, […]
The Playbook: Data Sharing for Research Report & Infographic
The Playbook: Data Sharing for Research is an FPF report on best practices for instituting research data-sharing programs between corporations and research institutions. FPF also developed a summary of recommendations from the full report as well as an infographic on The Value of Responsible Data Sharing for Research. The playbook addresses vital steps for data […]
Privacy Metrics Report
FPF convened policy, academic, and industry privacy experts to discuss privacy metrics and their benefits, and published a report based on those discussions. We learned that, beyond demonstrating compliance, privacy metrics have emerged as a key tool for improving privacy program performance and maturity in terms of customer trust, risk mitigation, and […]
Privacy Harms, Global Privacy Regulation, and Algorithmic Decision Making are Major Topics During Privacy Papers for Policymakers Event
For the 12th year, the Future of Privacy Forum (FPF) hosted its Privacy Papers for Policymakers event, honoring the 2021 Privacy Papers for Policymakers Award winners. This year’s event featured an opening keynote by Colorado Attorney General Phil Weiser and facilitated discussions between the winning authors – Daniel Solove, Ben Green, Woody Hartzog, Neil Richards, […]
Nothing to Hide: Tools for Talking (and Listening) About Data Privacy for Integrated Data Systems Report
Data-driven and evidence-based social policy innovation can help governments serve communities better, smarter, and faster. Integrated Data Systems (IDS) use data that government agencies routinely collect in the normal course of delivering public services to shape local policy and practice. Agencies can use these data to evaluate the effectiveness of new initiatives or to bridge gaps between public services and community providers.
Communicating about Data Privacy and Security Report
The ADRF Network is an evolving grassroots effort among researchers and organizations collaborating to improve access to, and promote the ethical use of, administrative data in social science research. As a supporter of evidence-based policymaking and research, FPF has been an integral part of the Network since its launch and has chaired the Network's Data Privacy and Security Working Group since November 2017.
Taming The Golem: Challenges of Ethical Algorithmic Decision-Making
This article examines the potential for bias and discrimination in automated algorithmic decision-making. As a group of commentators recently asserted, “[t]he accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology.” Yet this article rejects an approach that depicts every algorithmic process as a “black box” that is inevitably plagued by bias and potential injustice.
Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making Report & Infographic
Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also create valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns can be amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals’ eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments—including the benefits provided by automated decision-making frameworks, and the fallibility of human decision-making.