Research often requires sensitive data to answer important questions. Collecting and analyzing personal information ethically is challenging: researchers must protect the privacy of the individuals involved, honor informed consent, and comply with other legal obligations. The technology, policies, and ethical considerations researchers face are constantly shifting, making it difficult to keep up. That’s why FPF engages stakeholders across academia and industry to produce recommendations, best practices, and ethical review structures that promote responsible research. Our work centers on streamlining, encouraging, and promoting responsible research that respects essential privacy and ethical considerations throughout the research lifecycle. FPF works with policymakers to develop legislative protections that support effective, responsible research with strong privacy safeguards, including hosting events where policymakers and regulators can engage directly with practitioners from academia, advocacy, and industry.
FPF also convenes an Ethics and Data in Research Working Group. The group receives timely analysis of emerging legislation affecting research and data, meets to discuss the ethical and technological challenges of conducting research, and collaborates on best practices that protect privacy, decrease risk, and increase data sharing for research, partnerships, and infrastructure.
Today, the Future of Privacy Forum (FPF) published “The Playbook: Data Sharing for Research,” a report on best practices for instituting research data-sharing programs between corporations and research institutions. FPF also developed a summary of recommendations from the full report. Facilitating data sharing for research purposes between corporate data holders and academia can unlock new scientific […]
Analysis of personal data can improve services, advance research, and combat discrimination. However, such analysis can also raise legitimate concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns are amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals’ eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments—including the benefits provided by automated decision-making frameworks and the fallibility of human decision-making.