This Report first describes inherent privacy risks in an open data landscape, with an emphasis on potential harms related to re-identification, data quality, and fairness. To address these risks, the Report includes a Model Open Data Benefit-Risk Analysis (“Model Analysis”). The Model Analysis evaluates the types of data contained in a proposed open dataset, the potential benefits – and concomitant risks – of releasing the dataset publicly, and strategies for effective de-identification and risk mitigation.
Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also create valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns can be amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals’ eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments—including the benefits provided by automated decision-making frameworks, and the fallibility of human decision-making.
With this event, we aim to determine the current state of the art in privacy engineering; in particular, we will focus on those areas where the "art" needs to be developed further. The goal of this trans-Atlantic initiative is to identify the open research and development tasks needed to fully achieve the GDPR's ambitions.
Today, FPF released a new Infographic: Microphones & the Internet of Things: Understanding Uses of Audio Sensors in Connected Devices (read the Press Release here). From Amazon Echos to Smart TVs, we are seeing more home devices integrate microphones, often to provide a voice user interface powered by cloud-based speech recognition.
On June 27, 2017, the Future of Privacy Forum released an infographic, “Data and the Connected Car – Version 1.0,” describing the basic data-generating devices and flows in today’s connected vehicles. The infographic will help consumers and businesses alike understand the emerging data ecosystems that power incredible new features—features that can warn drivers of an accident before they see it, or jolt them awake if they fall asleep at the wheel.
Washington, DC – Today, the Future of Privacy Forum (FPF) and the Data Quality Campaign (DQC) relaunched FERPA|Sherpa, the leading resource for information about education privacy issues. Named after the core federal law that governs education privacy, FERPA|Sherpa provides students, parents, schools, ed tech companies, and policymakers with easy access to the resources, best practices, and guidelines that are essential to understanding the complex privacy issues arising at the intersection of kids, schools, and technology.
There are many lessons to learn from the spread of the WannaCry ransomware attacks across the globe. One lesson that needs more attention is the danger posed when a government attempts to mandate backdoors into computer software and systems.
During the International Association of Privacy Professionals’ Global Privacy Summit 2017, FPF’s CEO, Jules Polonetsky, took a moment to speak with NBC 4 Los Angeles about the privacy implications of granting apps permission to track your location.
Today, the Future of Privacy Forum is releasing a new tool for municipal and technology leaders: a visual guide, “Shedding Light on Smart City Privacy.” This tool will help citizens, companies, and communities understand the technologies at the heart of smart city and smart community projects – and their potential impact on privacy.
Kelsey Finch, FPF Policy Counsel, presented FPF’s 2016 Mobile Apps Study at the Federal Trade Commission’s annual PrivacyCon on January 12, 2017. Kelsey presented a visual representation of the App Study designed by FPF Fellow, Carolina Alonso. See the visual.