Privacy Scholarship Reporter – Issue 2


Artificial Intelligence and Machine Learning: The Privacy Challenge

by FPF Staff

Building on our first issue, which discussed the various privacy challenges related to algorithmic accountability, Future of Privacy Forum’s Privacy Scholarship Reporter now turns its focus to thoughtful academic work on the privacy challenges and ethical data use considerations of AI and Machine Learning.
Artificial Intelligence is perhaps easier to intuitively grasp than to explicitly define – a truth that embodies the very challenge of trying to design machines that reflect what it’s like to be human. With every new technology, there is the question, “what ‘new’ privacy challenge does this platform, service, or capability pose?” Are there new privacy challenges in AI? Or perhaps there are just the same questions about consent, transparency, use, and control, but in new contexts and products? If there are new aspects – can the existing policy framework address them sufficiently? In AI, we may find that there are indeed challenges that expand beyond simply greater scope and scale, and that push us to define new tools with which to address them.
What we do know is that we cannot leave AI or Machine Learning in a black box. While retail recommendations for “people who bought this also bought that” seem clear and reasonable, what do we understand about the stock-picking models that underlie our economy? A language translation program may feel straightforward, but what about the selection of news or travel options or job offers that are tied to your multi-language capabilities, or your demonstrated interest in – or distaste for – other cultures?
Machines are learning to read our emotions, interpret body language, and predict our comfort-seeking behavior. Are we building bigger and more impenetrable bubbles that will limit or divide us? Or are we creating more extended complex worlds that allow us to know and understand more about the world around us? How can we tell, and how can we understand and control our own data “selves” in the process? These are areas that deserve focused attention, and the scholarship addressing them is only just beginning.
In this issue are articles that provide an excellent basis and introduction to Machine Learning and Artificial Intelligence. They include publications that: propose methods that might combat potential discrimination and bias in predictive modeling based on AI; question whether existing regulatory and legal norms are sufficient for the challenges of AI (including privacy) or whether new frameworks may be desirable; ask how AI will impact individual rights, including 4th Amendment questions; delve into the tricky questions of ethics regarding the widespread use of AI systems in the general population; and explore what the presence of robots in our homes will do to our understanding of privacy.
Is there important scholarship missing from our list? Send your comments or feedback to [email protected]. We look forward to hearing from you.
Brenda Leong, Senior Counsel and Director of Strategy, FPF


“Big Data, Artificial Intelligence, Machine Learning and Data Protection” by The United Kingdom Information Commissioner’s Office (March 2017)

This discussion paper looks at the implications of big data, artificial intelligence (AI) and machine learning for data protection, and explains the ICO’s views on these. It defines big data, AI and machine learning, and identifies the particular characteristics that differentiate them from more traditional forms of data processing. Recognizing the benefits that can flow from big data analytics, the paper analyzes the main implications for data protection. It examines some of the tools and approaches that can help organizations ensure that their big data processing complies with data protection requirements. Also discussed are the question of whether data protection, as enacted in current legislation, can work for big data analytics, and the growing role of accountability alongside the more traditional principle of transparency. The main conclusions are that, while data protection can be challenging in a big data context, the benefits will not be achieved at the expense of data privacy rights, and that meeting data protection requirements will benefit both organizations and individuals. The paper closes with six key recommendations for organizations using big data analytics.

“Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI” by S. Bird, S. Barocas, K. Crawford, F. Diaz, H. Wallach, Microsoft Research New York City, Workshop on Fairness, Accountability, and Transparency in Machine Learning
This paper points out that while large-scale experimentation on users is not new in computer science, novel autonomous experimentation systems, driven by advances in artificial intelligence, are raising complex, unanswered questions for the field. Some of these questions are computational, while others relate to the social and ethical implications of these systems. The authors identify several questions about the social and ethical implications of autonomous experimentation systems, concerning the design of such systems, their effects on users, and their resistance to some common mitigations.
Abstract: In the field of computer science, large-scale experimentation on users is not new. However, driven by advances in artificial intelligence, novel autonomous systems for experimentation are emerging that raise complex, unanswered questions for the field. Some of these questions are computational, while others relate to the social and ethical implications of these systems. We see these normative questions as urgent because they pertain to critical infrastructure upon which large populations depend, such as transportation and healthcare. Although experimentation on widely used online platforms like Facebook has stoked controversy in recent years, the unique risks posed by autonomous experimentation have not received sufficient attention, even though such techniques are being trialled on a massive scale. In this paper, we identify several questions about the social and ethical implications of autonomous experimentation systems. These questions concern the design of such systems, their effects on users, and their resistance to some common mitigations.
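As a purely illustrative aside (not code or notation from the paper), the explore/exploit trade-off referenced in the paper’s title is often demonstrated with a simple multi-armed bandit loop, in which an autonomous system keeps experimenting on users by occasionally trying an untested option and otherwise serving the option that has performed best so far. A minimal Python sketch, with hypothetical option and user counts:

```python
import random

def epsilon_greedy(n_options, n_users, epsilon=0.1, true_rates=None):
    """Simulate an autonomous experimenter assigning one of n_options to each user."""
    # Hypothetical per-option success rates, unknown to the system.
    true_rates = true_rates or [random.random() for _ in range(n_options)]
    successes = [0] * n_options
    trials = [0] * n_options
    for _ in range(n_users):
        if random.random() < epsilon:
            # Explore: experiment with a randomly chosen option.
            choice = random.randrange(n_options)
        else:
            # Exploit: serve the option with the best observed success rate so far.
            choice = max(range(n_options),
                         key=lambda i: successes[i] / trials[i] if trials[i] else 0.0)
        trials[choice] += 1
        successes[choice] += int(random.random() < true_rates[choice])
    return trials, successes

trials, successes = epsilon_greedy(n_options=3, n_users=10_000)
print(trials, successes)
```

Even this toy loop makes visible the kind of concern the authors raise: every user interaction is simultaneously a service and an experiment, and the decision of who receives which option is made autonomously.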


“Averting Robot Eyes” by M.E. Kaminski, M. Reuben, C. Grimm, W.D. Smart, Maryland Law Review, Vol. 76, p. 983, 2017
The authors argue that home robots will inevitably cause privacy harms while acknowledging that robots can provide beneficial services — as long as consumers trust them. This paper evaluates potential technological solutions that could help home robots keep their promises, avert their “eyes”, and otherwise mitigate privacy harms. The goal of the study is to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms. Five principles for home robots and privacy design are proposed: data minimization, purpose specification, use limitation, honest anthropomorphism, and dynamic feedback and participation. Current research into privacy-sensitive robotics, evaluating what technological solutions are feasible and where the harder problems lie, is also discussed.
Abstract: Home robots will cause privacy harms. At the same time, they can provide beneficial services — as long as consumers trust them. This Essay evaluates potential technological solutions that could help home robots keep their promises, avert their eyes, and otherwise mitigate privacy harms. Our goals are to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms.
We posit that home robots will raise privacy problems of three basic types: (1) data privacy problems; (2) boundary management problems; and (3) social/relational problems. Technological design can ward off, if not fully prevent, a number of these harms. We propose five principles for home robots and privacy design: data minimization, purpose specifications, use limitations, honest anthropomorphism, and dynamic feedback and participation. We review current research into privacy-sensitive robotics, evaluating what technological solutions are feasible and where the harder problems lie. We close by contemplating legal frameworks that might encourage the implementation of such design, while also recognizing the potential costs of regulation at these early stages of the technology.


“Ethically Aligned Design” by The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, December 2016

The IEEE Global Initiative provides the opportunity to bring together multiple voices in the Artificial Intelligence and Autonomous Systems communities to identify and find consensus on timely issues. This document’s purpose is to advance a public discussion of how these intelligent and autonomous technologies can be aligned with moral values and ethical principles that prioritize human wellbeing. It includes eight sections, each addressing a specific topic related to AI/AS that has been discussed at length by a specific committee of The IEEE Global Initiative. Issues and candidate recommendations pertaining to these topics are listed in each committee section. The eight sections are: General Principles; Embedding Values in Autonomous Intelligence Systems; Methodologies to Guide Ethical Research and Design; Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI); Personal Data and Individual Access Control; Reframing Autonomous Weapons Systems; Economics/Humanitarian Issues; and Law.


“Law and Regulation of Artificial Intelligence and Robots: Conceptual Framework and Normative Implications” by N. Petit, University of Liege – School of Law; International Center for Law & Economics (ICLE), March 9, 2017

In light of the many challenges that affect attempts to devise law and regulation in a context of technological incipiency, this paper seeks to offer a methodology geared to the specific fields of AIs and robots. It addresses the following normative question: should a social planner adopt specific rules and institutions for AIs and robots, or should the resolution of issues be left to Hume’s three “fundamental laws of nature”, namely ordinary rules on property and liability, contract laws and the court system? The paper’s four sections review the main regulatory approaches proposed in the existing AI and robotics literature; discuss identifiable regulatory trade-offs, that is, the threats and opportunities created by the introduction of regulation in relation to AIs and robotic applications; examine liability as a case study; and present a possible methodology for the law and regulation of AIs and robots.
Abstract: Law and regulation of Artificial Intelligence (“AI”) and robots is emerging, fuelled by the introduction of industrial and commercial applications in society. A common thread to many regulatory initiatives is to occur without a clear or explicit methodological framework. In light of the many challenges that affect attempts to devise law and regulation in a context of technological incipiency, this paper seeks to offer a methodology geared to the specific fields of AIs and robots. At bottom, the paper addresses the following normative question: should a social planer adopt specific rules and institutions for AIs and robots or should the resolution of issues be left to Hume’s three “fundamental laws of nature”, namely ordinary rules on property and liability, contract laws and the courts system? To explore that question, the analysis is conducted under a public interest framework.
Section 1 reviews the main regulatory approaches proposed in the existing AI and robotic literature, and stresses their advantages and disadvantages. Section 2 discusses identifiable regulatory trade-offs, that is the threats and opportunities created by the introduction of regulation in relation to AIs and robotic applications. Section 3 focuses on the specific area of liability as a case-study. Finally, Section 4 proposes a possible methodology for the law and regulation of AIs and robots. In conclusion, the paper proposes to index the regulatory response upon the nature of the externality – positive or negative – created by an AI application, and to distinguish between discrete, systemic and existential externalities.


“Machine Learning: The Power and Promise of Computers That Learn by Example” by The Royal Society, April 2017
This report by The Royal Society provides an excellent overview of machine learning, its potential, and its impact on society. Through this initiative, the Society sought to investigate the potential of machine learning over the next 5-10 years and the barriers to realizing that potential. In doing so, the project engaged with key audiences — in policy, industry, academia and the public — to raise awareness of machine learning, understand views held by the public, contribute to the public debate about Machine Learning, and identify the key social, ethical, scientific and technical issues it presents. Chapters five and six discuss the societal impact of Machine Learning, looking more closely at the ethical and technological privacy challenges these technologies create.


“Rethinking the Fourth Amendment in the Age of Supercomputers, Artificial Intelligence, and Robots” by M. Reid, West Virginia Law Review, Vol. 119, No. 101, 2017
This paper posits that it is not farfetched to think law enforcement’s use of cognitive computing will extend to using thinking, real-time robots in the field in the not-so-distant future. IBM’s Watson currently uses its artificial intelligence to suggest medical diagnoses and treatment in the healthcare industry and assists the finance industry in improving investment decisions. In this article, the author explores the consequences of predictive and content analytics and the future of cognitive computing, such as utilizing “robots” such as an imaginary “Officer Joe Roboto” in the law enforcement context. Would our interactions with Officer Joe Roboto trigger the same Fourth Amendment concerns and protections as those when dealing with a flesh-and-blood police officer? Are we more afraid of a “robotic” Watson, its capabilities, and lack of feeling and biases, compared to a human law enforcement officer? This article attempts to explore the ramifications of using such computers/robots in the future.
Abstract: Law enforcement currently uses cognitive computers to conduct predictive and content analytics and manage information contained in large police data files. These big data analytics and insight capabilities are more effective than using traditional investigative tools and save law enforcement time and a significant amount of financial and personnel resources. It is not farfetched to think law enforcement’s use of cognitive computing will extend to using thinking, real-time robots in the field in the not-so-distant future. IBM’s Watson currently uses its artificial intelligence to suggest medical diagnoses and treatment in the healthcare industry and assists the finance industry in improving investment decisions. IBM and similar companies already offer predictive analytics and cognitive computing programs to law enforcement for real-time intelligence and investigative purposes. This article will explore the consequences of predictive and content analytics and the future of cognitive computing, such as utilizing “robots” such as an imaginary “Officer Joe Roboto” in the law enforcement context. Would our interactions with Officer Joe Roboto trigger the same Fourth Amendment concerns and protections as those when dealing with a flesh-and-blood police officer? Are we more afraid of a “robotic” Watson, its capabilities, and lack of feeling and biases, compared to a human law enforcement officer? Assuming someday in the future we might be able to solve the physical limitations of a robot, would a “robotic” officer be preferable to a human one? What sort of limitations would we place on such technology? This article attempts to explore the ramifications of using such computers/robots in the future. Autonomous robots with artificial intelligence and the widespread use of predictive analytics are the future tools of law enforcement in a digital age, and we must come up with solutions as to how to handle the appropriate use of these tools.


“Equality of Opportunity in Supervised Learning” by M. Hardt, E. Price, N. Srebro, Cornell University Library, [v1] 7 Oct 2016
The authors of this paper use a case study of FICO credit scores to illustrate their notion of non-discrimination, which is “oblivious”: it depends only on the joint statistics of the predictor, the target, and the protected attribute, not on the interpretation of individual features of the data. The study examines the inherent limits of defining and identifying biases on the basis of such oblivious measures and proposes a criterion for discrimination against a specified sensitive attribute. The authors show that any learned predictor can be optimally adjusted to remove discrimination according to their definition.
Abstract: We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
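To make the post-processing idea concrete, here is a minimal sketch in the spirit of the paper’s equal-opportunity criterion: given scores from some already-learned predictor, it chooses a separate threshold for each protected group so that the true positive rate is roughly equal across groups. The synthetic data, function names, and target rate are our own illustrative assumptions, not the authors’ code.

```python
import numpy as np

def equal_opportunity_thresholds(scores, labels, groups, target_tpr=0.8):
    """Pick, per protected group, the score threshold whose true positive rate
    among that group's positives is closest to target_tpr (illustrative only)."""
    thresholds = {}
    for g in np.unique(groups):
        # Scores of the true positives in group g.
        pos_scores = np.sort(scores[(groups == g) & (labels == 1)])
        best_t, best_gap = 0.5, float("inf")
        for t in pos_scores:
            tpr = np.mean(pos_scores >= t)  # true positive rate at threshold t
            if abs(tpr - target_tpr) < best_gap:
                best_t, best_gap = t, abs(tpr - target_tpr)
        thresholds[g] = best_t
    return thresholds

# Tiny synthetic example: scores from a hypothetical learned predictor,
# binary outcomes, and two protected groups.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=2000)
labels = rng.integers(0, 2, size=2000)
scores = np.clip(0.5 * labels + 0.1 * groups + rng.normal(0, 0.25, size=2000), 0, 1)

for g, t in equal_opportunity_thresholds(scores, labels, groups).items():
    group_pos = (groups == g) & (labels == 1)
    print(f"group {g}: threshold={t:.3f}, TPR={np.mean(scores[group_pos] >= t):.3f}")
```

Note that, consistent with the “oblivious” framing described above, the adjustment uses only the joint behavior of the score, the outcome, and the group attribute; it never inspects the underlying features.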