FPF Releases "Understanding Facial Detection, Characterization, and Recognition Technologies" and "Privacy Principles for Facial Recognition Technology in Commercial Applications"

These resources will help businesses and policymakers better understand and evaluate the growing use of face-based biometric systems in consumer applications. Facial recognition technology can help users organize and label photos, improve online services for visually impaired users, and help stores and stadiums better serve customers. At the same time, the technology often involves the collection and use of sensitive biometric data, requiring careful assessment of the data protection issues it raises. Understanding the technology and building trust are necessary to maximize its benefits and minimize its risks.
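
The title's three tiers differ in what they infer: detection locates faces, characterization estimates attributes, and recognition identifies individuals. A minimal sketch of the first tier, face detection, assuming OpenCV and its bundled Haar cascade (illustrative tool choices, not ones named in the FPF resources):

```python
# Illustrative sketch: face *detection* only -- locating faces in an image
# without characterizing or identifying anyone. OpenCV and its bundled Haar
# cascade are assumptions for illustration, not tools named by FPF.
import cv2

def detect_faces(image_path: str):
    """Return bounding boxes (x, y, w, h) for faces found in the image."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # detectMultiScale scans the image at several scales and returns
    # rectangles around likely faces; no identity is inferred.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if __name__ == "__main__":
    for (x, y, w, h) in detect_faces("photo.jpg"):
        print(f"face at x={x}, y={y}, size={w}x{h}")
```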


Taming The Golem: Challenges of Ethical Algorithmic Decision-Making

This article examines the potential for bias and discrimination in automated algorithmic decision-making. As a group of commentators recently asserted, “[t]he accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology.” Yet this article rejects an approach that depicts every algorithmic process as a “black box” that is inevitably plagued by bias and potential injustice.
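
The article's rejection of the universal "black box" framing can be made concrete: many decision models expose their logic directly for audit. A minimal sketch, with hypothetical data and feature names, assuming scikit-learn (none of this is drawn from the article itself):

```python
# Minimal sketch: a simple scoring model whose decision logic is fully
# inspectable -- the opposite of a "black box". The data and feature names
# are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "tenure_years", "late_payments"]
X = np.array([[55, 4, 0], [32, 1, 3], [78, 9, 0], [41, 2, 5]], dtype=float)
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = denied

model = LogisticRegression().fit(X, y)

# Each input's contribution to the decision is visible in the coefficients,
# so the process can be audited for bias rather than presumed opaque.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: weight {coef:+.3f}")
```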


FPF Publishes Model Open Data Benefit-Risk Analysis

This Report first describes inherent privacy risks in an open data landscape, with an emphasis on potential harms related to re-identification, data quality, and fairness. To address these risks, the Report includes a Model Open Data Benefit-Risk Analysis (“Model Analysis”). The Model Analysis evaluates the types of data contained in a proposed open dataset, the potential benefits – and concomitant risks – of releasing the dataset publicly, and strategies for effective de-identification and risk mitigation.
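
As a purely hypothetical sketch of how such a screen might be operationalized in code, with invented field categories, weights, and threshold (the Model Analysis defines its own factors):

```python
# Hypothetical sketch of a benefit-risk screen for an open dataset. The
# categories, weights, and threshold below are invented for illustration;
# the Model Analysis itself defines its own factors.
from dataclasses import dataclass

# Rough re-identification risk weights by field type (illustrative only).
FIELD_RISK = {"direct_identifier": 1.0, "quasi_identifier": 0.5, "non_identifying": 0.1}

@dataclass
class Field:
    name: str
    category: str  # one of FIELD_RISK's keys

def screen(fields: list[Field], benefit_score: float) -> str:
    """Weigh a dataset's estimated benefit against its summed field risk."""
    if any(f.category == "direct_identifier" for f in fields):
        return "withhold or de-identify: direct identifiers present"
    risk = sum(FIELD_RISK[f.category] for f in fields)
    return "release" if benefit_score > risk else "mitigate further before release"

fields = [Field("zip_code", "quasi_identifier"), Field("visit_count", "non_identifying")]
print(screen(fields, benefit_score=1.2))
```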


Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making

Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also raise valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns can be amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals' eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments, including both the benefits provided by automated decision-making frameworks and the fallibility of human decision-making.
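
One common way such differential treatment is surfaced in practice is by comparing outcome rates across groups. A minimal sketch, with hypothetical decisions and a simple demographic-parity gap (an illustrative metric, not one prescribed by the report):

```python
# Minimal sketch: checking an automated system's outcomes for differential
# treatment across groups. The data and the demographic-parity metric are
# illustrative assumptions, not methods from the FPF report.
from collections import defaultdict

# (group, decision) pairs from a hypothetical eligibility system; 1 = approved.
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]

totals, approvals = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    approvals[group] += outcome

rates = {g: approvals[g] / totals[g] for g in totals}
print("approval rates:", rates)
# A large gap in approval rates flags possible disparate impact for review.
print("demographic parity gap:", max(rates.values()) - min(rates.values()))
```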


Privacy Engineering Research and the GDPR: A Trans-Atlantic Initiative

With this event, we aim to determine the relevant state of the art in privacy engineering; in particular, we will focus on those areas where the “art” needs to be developed further. The goal of this trans-Atlantic initiative is to identify the open research and development tasks needed to fully achieve the GDPR’s ambitions.


The Future of Microphones in Connected Devices

Today, FPF released a new infographic, “Microphones & the Internet of Things: Understanding Uses of Audio Sensors in Connected Devices.” From Amazon Echos to smart TVs, more and more home devices are integrating microphones, often to provide a voice user interface powered by cloud-based speech recognition.
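
The pattern is worth spelling out: audio is captured locally on the device, then shipped to a cloud service for transcription. A minimal sketch of that flow, assuming the Python SpeechRecognition library (an illustrative choice, not a component of any product named above):

```python
# Illustrative sketch of the voice-interface pattern described above:
# capture audio on the device, send it to a cloud service for recognition.
# The SpeechRecognition library is an assumption for this sketch only.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate to room noise
    print("listening...")
    audio = recognizer.listen(source)  # audio stays local until this point

# The raw audio is uploaded here; the transcription happens in the cloud.
try:
    print("heard:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("could not understand audio")
```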


Infographic: Data and the Connected Car – Version 1.0

On June 27, 2017, the Future of Privacy Forum released an infographic, “Data and the Connected Car – Version 1.0,” describing the basic data-generating devices and flows in today’s connected vehicles. The infographic will help consumers and businesses alike understand the emerging data ecosystems that power incredible new features—features that can warn drivers of an accident before they see it, or jolt them awake if they fall asleep at the wheel.
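
One of the basic data flows the infographic maps is telemetry read from the vehicle's diagnostic bus. A minimal sketch, assuming the python-obd library and a standard OBD-II adapter (illustrative choices, not components of the infographic itself; production vehicle systems use many other channels):

```python
# Illustrative sketch of one connected-car data flow: reading telemetry
# from a vehicle's OBD-II diagnostic port. The python-obd library is an
# assumption for this sketch, not part of the infographic.
import obd

connection = obd.OBD()  # auto-detects a connected OBD-II adapter

# Each query pulls one sensor reading off the vehicle bus.
for command in (obd.commands.SPEED, obd.commands.RPM, obd.commands.FUEL_LEVEL):
    response = connection.query(command)
    if not response.is_null():
        print(command.name, "=", response.value)
```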