On Friday, October 27, 2017, the Future of Privacy Forum filed comments with the Federal Trade Commission in advance of the December 12, 2017 Informational Injury Workshop. The purpose of the workshop is to examine consumer injury in the context of privacy and data security. FPF’s comments focus on describing the harms that can arise from automated decision-making as well as highlighting existing risk-based privacy analyses.
Analysis of personal data can be used to improve services, promote inclusion, and combat discrimination. However, such analysis can also create valid concerns about differential treatment of individuals or disparate impacts on vulnerable communities. In FPF's preliminary review of the relevant literature and public policy regarding automated decision-making, we found that the concerns identified by leaders in this space fall into four broad categories of potential harms: (1) loss of opportunity; (2) economic loss; (3) social stigmatization; and (4) loss of liberty. Depending on the context and circumstances, we determined that each of these categories of harm can accrue to individuals, groups, or society as a whole. Notably, not all harms described in the existing literature are necessarily legally cognizable, even where they are widely considered unfair, while others may already be illegal under existing laws.
Regarding potential solutions, we explain strategies that generally fall into one of four categories: (1) algorithmic design solutions; (2) business process solutions; (3) legal and policy solutions; or (4) data methods solutions. As with the harms, these potential solutions describe the universe of proposals rather than specific recommended solutions. It is also important to recognize that proposed solutions may sometimes impact other important values, such as freedom of speech or economic competition. Their use may therefore need to be considered on a case-by-case basis, balancing the benefits and risks of intervention.
The challenges of conceptualizing informational injury are increasingly relevant as risk-based privacy analyses become more common in law, policy, and internal business practices. One long-standing legal basis for processing data in the European Union is the "legitimate interests" framework, which has similarities to the FTC's unfairness analysis under Section 5 of the FTC Act. Under this basis for lawful processing, companies may engage in data processing if their legitimate interests are not "overridden by the interests or fundamental rights and freedoms of the data subject." In addition, under the General Data Protection Regulation (GDPR), which will come into effect in May 2018, companies are required to carry out a data protection impact assessment if data processing is "likely to result in a high risk to the rights and freedoms of natural persons." Each of these benefit-risk analyses depends on an accurate assessment of the nature of the underlying informational injuries.
We see a promising set of solutions arising in literature and regulatory conversations on the topic of automated decision-making and risk-based analyses, and we look forward to a robust conversation on these issues at the upcoming FTC workshop.