The European Commission published a Communication on “Artificial Intelligence for Europe” on April 24, 2018. It highlights the transformative nature of AI technology for the world and calls for the EU to lead the way in developing AI grounded in a fundamental rights framework. “AI for good and for all” is the motto the Commission proposes. The Communication can be summed up as announcing concrete funding for research projects, clear social goals, and a commitment to further deliberation on everything else.
The ADRF Network is an evolving grassroots effort among researchers and organizations seeking to collaborate on improving access to, and promoting the ethical use of, administrative data in social science research. As a supporter of evidence-based policymaking and research, FPF has been an integral part of the Network since its launch and has chaired the Network’s Data Privacy and Security Working Group since November 2017.
Beyond Explainability aims to provide a template for effectively managing this risk in practice, giving lawyers, compliance personnel, data scientists, and engineers a framework to safely create, deploy, and maintain machine learning (ML) systems, and enabling effective communication across these distinct organizational perspectives.
Researchers at Princeton University’s Center for Information Technology Policy (CITP) have demonstrated that many websites are using third-party tools to track visitors’ individual browsing sessions. “Session replay scripts” can raise serious privacy concerns if implemented incorrectly, but with the right safeguards they can be part of a range of ordinary, useful web analytics tools. FPF has published a three-page guide to assist privacy professionals in deciding whether and how to implement session replay scripts.
FPF requested feedback from the public on its proposed Draft Open Data Risk Assessment for the City of Seattle. In 2016, the City of Seattle declared in its Open Data Policy that the city’s data would be “open by preference,” except when doing so may affect individual privacy. To ensure its Open Data program effectively protects individuals, Seattle committed to performing an annual risk assessment and tasked FPF with creating and deploying an initial privacy risk assessment methodology for open data.
Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also create valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns can be amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals’ eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments—including the benefits provided by automated decision-making frameworks, and the fallibility of human decision-making.
Today, the Future of Privacy Forum released a new study, Understanding Corporate Data Sharing Decisions: Practices, Challenges, and Opportunities for Sharing Corporate Data with Researchers. In this report, we aim to contribute to the literature by seeking the “ground truth” from the corporate sector about the challenges companies encounter when they consider making data available for academic research. We hope that the impressions and insights gained from this first look at the issue will help formulate further research questions, inform the dialogue between key stakeholders, and identify constructive next steps and areas for further action and investment.
Washington, DC – Today, the Future of Privacy Forum released a new study, Understanding Corporate Data Sharing Decisions: Practices, Challenges, and Opportunities for Sharing Corporate Data with Researchers. In this report, FPF presents findings from research and interviews with experts in the academic and industry communities. The report covers three main areas: 1) the extent to which leading companies make data available to support published research that contributes to public knowledge; 2) why and how companies share data for academic research; and 3) the risks companies perceive to be associated with such sharing, as well as their strategies for mitigating those risks.
Today, the Future of Privacy Forum (FPF) released “Law Enforcement Access to Student Records: A Guide for School Administrators & Ed Tech Service Providers,” written by Amelia Vance and Sarah Williamson. This guide helps answer some of the basic questions we have heard from key stakeholders about law enforcement access to student data over the past nine months.
The Future of Privacy Forum conducted a study of the companies enrolled in the EU-US Privacy Shield program and determined that 114 European-headquartered companies are active Privacy Shield participants. These European companies rely on the program to transfer data to their US subsidiaries or to essential vendors that support their business needs.