AI and Machine Learning: Leading Academic Publications

Leading academics around the world are focused on the ethical, theoretical, and practical challenges that AI and ML pose – whether in commercial, social, or legal settings – and are considering everything from biased algorithms to robot rights. Here is a collection of many of the leading papers, with summaries of their themes.

* Organized by date of publication

2020

State of AI Report 2020, Nathan Benaich & Ian Hogarth, October 1, 2020

Now in its third year, the State of AI Report 2020 features invited contributions from a range of well-known and up-and-coming companies and research groups. The Report considers the following key dimensions: Research: Technology breakthroughs and capabilities; Talent: supply, demand and concentration of AI talent; Industry: areas of commercial application for AI and its business impact; Politics: regulation of AI, its economic implications and the emerging geopolitics of AI; Predictions: what we believe will happen and a performance review to keep us honest.

Enhancing Privacy in Robotics Via Judicious Sensor Selection, Stephen Eick & Annie I. Antón, September 15, 2020

Roboticists are grappling with how to address privacy in robot design at a time when regulatory frameworks around the world increasingly require systems to be engineered to preserve and protect privacy. This paper surveys the top robotics journals and conferences over the past four decades to identify contributions with respect to privacy in robot design. Our survey revealed that less than half of one percent of the ~89,120 papers in our study even mention the word privacy. Herein, we propose privacy-preserving approaches for roboticists to employ in robot design, including: assessing a robot’s purpose and environment; ensuring privacy by design by selecting sensors that do not collect information that is not essential to the core objectives of that robot; embracing both privacy and performance as fundamental design challenges to be addressed early in the robot lifecycle; and performing privacy impact assessments.

Word Meaning in Minds and Machines, Brenden M. Lake & Gregory L. Murphy, August 4, 2020

Machines show an increasingly broad set of linguistic competencies, thanks to recent progress in Natural Language Processing (NLP). Many algorithms stem from past computational work in psychology, raising the question of whether they understand words as people do. In this paper, we compare how humans and machines represent the meaning of words. We argue that contemporary NLP systems are promising models of human word similarity, but they fall short in many other respects. Current models are too strongly linked to the text-based patterns in large corpora, and too weakly linked to the desires, goals, and beliefs that people use words in order to express. Word meanings must also be grounded in vision and action, and capable of flexible combinations, in ways that current systems are not. We pose concrete challenges for developing machines with a more human-like, conceptual basis for word meaning. We also discuss implications for cognitive science and NLP.

Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims, Miles Brundage et al., April 20, 2020

With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held accountable. Those outside of a given organization also need effective means of scrutinizing such claims. This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. We analyze ten mechanisms for this purpose–spanning institutions, software, and hardware–and make recommendations aimed at implementing, exploring, or improving those mechanisms.

Why Fairness Cannot be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI, Sandra Wachter et al., March 27, 2020

In recent years a substantial literature has emerged concerning bias, discrimination, and fairness in AI and machine learning. Connecting this work to existing legal non-discrimination frameworks is essential to create tools and methods that are practically useful across divergent legal regimes. While much work has been undertaken from an American legal perspective, comparatively little has mapped the effects and requirements of EU law. This Article addresses this critical gap between legal, technical, and organisational notions of algorithmic fairness. Through analysis of EU non-discrimination law and jurisprudence of the European Court of Justice (ECJ) and national courts, we identify a critical incompatibility between European notions of discrimination and existing work on algorithmic and automated fairness. A clear gap exists between statistical measures of fairness as embedded in myriad fairness toolkits and governance mechanisms and the context-sensitive, often intuitive and ambiguous discrimination metrics and evidential requirements used by the ECJ; we refer to this approach as “contextual equality.” This Article makes three contributions. First, we review the evidential requirements to bring a claim under EU non-discrimination law. Due to the disparate nature of algorithmic and human discrimination, the EU’s current requirements are too contextual, reliant on intuition, and open to judicial interpretation to be automated. Many of the concepts fundamental to bringing a claim, such as the composition of the disadvantaged and advantaged group, the severity and type of harm suffered, and requirements for the relevance and admissibility of evidence, require normative or political choices to be made by the judiciary on a case-by-case basis. We show that automating fairness or non-discrimination in Europe may be impossible because the law, by design, does not provide a static or homogenous framework suited to testing for discrimination in AI systems. Second, we show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminate. Humans discriminate due to negative attitudes (e.g. stereotypes, prejudice) and unintentional biases (e.g. organisational practices or internalised stereotypes) which can act as a signal to victims that discrimination has occurred. Equivalent signalling mechanisms and agency do not exist in algorithmic systems. Compared to traditional forms of discrimination, automated discrimination is more abstract and unintuitive, subtle, intangible, and difficult to detect. The increasing use of algorithms disrupts traditional legal remedies and procedures for detection, investigation, prevention, and correction of discrimination which have predominantly relied upon intuition. Consistent assessment procedures that define a common standard for statistical evidence to detect and assess prima facie automated discrimination are urgently needed to support judges, regulators, system controllers and developers, and claimants. Finally, we examine how existing work on fairness in machine learning lines up with procedures for assessing cases under EU non-discrimination law. A ‘gold standard’ for assessment of prima facie discrimination has been advanced by the European Court of Justice but not yet translated into standard assessment procedures for automated discrimination. We propose ‘conditional demographic disparity’ (CDD) as a standard baseline statistical measurement that aligns with the Court’s ‘gold standard’. Establishing a standard set of statistical evidence for automated discrimination cases can help ensure consistent procedures for assessment, but not judicial interpretation, of cases involving AI and automated systems. Through this proposal for procedural regularity in the identification and assessment of automated discrimination, we clarify how to build considerations of fairness into automated systems as far as possible while still respecting and enabling the contextual approach to judicial interpretation practiced under EU non-discrimination law.
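
For readers who want to experiment with the proposed baseline measure, a minimal sketch of conditional demographic disparity is given below. The column names, the max-minus-min summary, and the population weighting are illustrative assumptions, not the Article's exact formulation.

```python
# Illustrative sketch of conditional demographic disparity (CDD): compare the
# favourable-outcome rate across protected groups *within* each stratum of a
# legitimate conditioning attribute, then average over strata.
# Column names ("outcome", "group", "stratum") are hypothetical.
import pandas as pd

def conditional_demographic_disparity(df, outcome="outcome",
                                      group="group", stratum="stratum"):
    disparities = {}
    for s, block in df.groupby(stratum):
        rates = block.groupby(group)[outcome].mean()  # favourable-outcome rate per group
        disparities[s] = rates.max() - rates.min()    # within-stratum disparity
    weights = df[stratum].value_counts(normalize=True) # weight strata by population share
    return sum(disparities[s] * weights[s] for s in disparities)

df = pd.DataFrame({
    "outcome": [1, 0, 1, 1, 0, 1, 0, 0],
    "group":   ["a", "a", "b", "b", "a", "b", "a", "b"],
    "stratum": ["x", "x", "x", "x", "y", "y", "y", "y"],
})
print(conditional_demographic_disparity(df))
```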

Why Am I Seeing This?: How Video and E-Commerce Platforms Use Recommendation Systems to Shape User Experiences, Spandana Singh, March 25, 2020

Internet platforms are increasingly adopting artificial intelligence and machine learning tools in order to shape the content we see and engage with online. Today, numerous internet platforms utilize algorithmic decision-making to provide users with recommendations on content, connections, purchases, and more. This report is the last in a series of four reports that explore different issues regarding how internet platforms use automated tools to shape the content we see and influence how this content is delivered to us. The first report in this series focused on how automated tools can be leveraged to moderate content online. The second report focused on how internet platforms deploy algorithms to rank and curate content in search engine results and in news feeds. The third report focused on how platforms use artificial intelligence to optimize the targeting and delivery of advertisements. This final report focuses on how platforms use automated tools to make recommendations to users. All four of these reports also seek to explore how internet platforms, policymakers, and researchers can better promote fairness, accountability, and transparency around these automated tools and decision-making practices.

A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing, Navdeep Gill et al., February 29, 2020

This manuscript outlines a viable approach for training and evaluating machine learning systems for high-stakes, human-centered, or regulated applications using common Python programming tools. The accuracy and intrinsic interpretability of two types of constrained models, monotonic gradient boosting machines and explainable neural networks, a deep learning architecture well-suited for structured data, are assessed on simulated data and publicly available mortgage data. For maximum transparency and the potential generation of personalized adverse action notices, the constrained models are analyzed using post-hoc explanation techniques including plots of partial dependence and individual conditional expectation and with global and local Shapley feature importance. The constrained model predictions are also tested for disparate impact and other types of discrimination using measures with long-standing legal precedents, adverse impact ratio, marginal effect, and standardized mean difference, along with straightforward group fairness measures. By combining interpretable models, post-hoc explanations, and discrimination testing with accessible software tools, this text aims to provide a template workflow for machine learning applications that require high accuracy and interpretability and that mitigate risks of discrimination.
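
A rough sketch of two of the workflow's ingredients, a monotonically constrained gradient boosting model and an adverse impact ratio check, is shown below. It assumes XGBoost's monotone_constraints parameter; the synthetic data, the constraint directions, and the four-fifths threshold in the comment are illustrative, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a monotonic gradient boosting model
# plus an adverse impact ratio (AIR) check on its predictions.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                       # hypothetical features
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
group = rng.integers(0, 2, size=1000)                # hypothetical protected attribute

model = xgb.XGBClassifier(
    monotone_constraints="(1,-1,0)",                 # enforce directionally sensible effects
    n_estimators=100, max_depth=3,
)
model.fit(X, y)

approved = model.predict(X)
air = approved[group == 1].mean() / approved[group == 0].mean()
print(f"adverse impact ratio: {air:.2f}")            # values below ~0.8 often flag concern
```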

Four Principles of Explainable AI as Applied to Biometrics and Facial Forensic Algorithms, P. Jonathon Phillips et al., February 3, 2020

Traditionally, researchers in automatic face recognition and biometric technologies have focused on developing accurate algorithms. With this technology being integrated into operational systems, engineers and scientists are being asked, do these systems meet societal norms? The origin of this line of inquiry is ‘trust’ of artificial intelligence (AI) systems. In this paper, we concentrate on adapting explainable AI to face recognition and biometrics, and we present four principles of explainable AI as applied to face recognition and biometrics. The principles are illustrated by four case studies, which show the challenges and issues in developing algorithms that can produce explanations.

The Windfall Clause: Distributing the Benefits of AI, Cullen O’Keefe et al., January 30, 2020

Over the long run, technology has improved the human condition. Nevertheless, the economic progress from technological innovation has not arrived equitably or smoothly. While innovation often produces great wealth, it has also often been disruptive to labor, society, and world order. In light of ongoing advances in artificial intelligence (“AI”), we should prepare for the possibility of extreme disruption, and act to mitigate its negative impacts. This report introduces a new policy lever to this discussion: the Windfall Clause.

Doctor XAI: An Ontology-Based Approach to Black-Box Sequential Data Classification Explanations, Cecilia Panigutti et al., January 23, 2020

Doctor XAI is a model-agnostic explainability technique designed for black-box classifiers that handle multi-labeled, sequential, ontology-linked data, a setting common in healthcare. The authors focus on explaining Doctor AI, a recurrent model that predicts a patient’s future diagnoses from their clinical history, and show how exploiting a medical ontology improves the quality of the extracted explanations.

Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society, Carina Prunkl & Jess Whittlestone, January 21, 2020

One way of carving up the broad ‘AI ethics and society’ research space that has emerged in recent years is to distinguish between ‘near-term’ and ‘long-term’ research. While such ways of breaking down the research space can be useful, we put forward several concerns about the near/long-term distinction gaining too much prominence in how research questions and priorities are framed. We highlight some ambiguities and inconsistencies in how the distinction is used, and argue that while there are differing priorities within this broad research community, these differences are not well-captured by the near/long-term distinction. We unpack the near/long-term distinction into four different dimensions, and propose some ways that researchers can communicate more clearly about their work and priorities using these dimensions. We suggest that moving towards a more nuanced conversation about research priorities can help establish new opportunities for collaboration, aid the development of more consistent and coherent research agendas, and enable identification of previously neglected research areas.

The Incentives that Shape Behaviour, Ryan Carey et al., January 20, 2020

Which variables does an agent have an incentive to control with its decision, and which variables does it have an incentive to respond to? We formalize these incentives, and demonstrate unique graphical criteria for detecting them in any single decision causal influence diagram. To this end, we introduce structural causal influence models, a hybrid of the influence diagram and structural causal model frameworks. Finally, we illustrate how these incentives predict agent incentives in both fairness and AI safety applications.

Social and Governance Implications of Improved Data Efficiency, Aaron D. Tucker et al., January 14, 2020

Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the social-economic impact of increased data efficiency. Specifically, we examine the intuition that data efficiency will erode the barriers to entry protecting incumbent data-rich AI firms, exposing them to more competition from data-poor firms. We find that this intuition is only partially correct: data efficiency makes it easier to create ML applications, but large AI firms may have more to gain from higher performing AI systems. Further, we find that the effects on privacy, data markets, robustness, and misuse are complex. For example, while it seems intuitive that misuse risk would increase along with data efficiency – as more actors gain access to any level of capability – the net effect crucially depends on how much defensive measures are improved. More investigation into data efficiency, as well as research into the “AI production function”, will be key to understanding the development of the AI industry and its societal impacts.

The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?, Toby Shevlane & Allan Dafoe, January 9, 2020

There is growing concern over the potential misuse of artificial intelligence (AI) research. Publishing scientific research can facilitate misuse of the technology, but the research can also contribute to protections against misuse. This paper addresses the balance between these two effects. Our theoretical framework elucidates the factors governing whether the published research will be more useful for attackers or defenders, such as the possibility for adequate defensive measures, or the independent discovery of the knowledge outside of the scientific community. The balance will vary across scientific fields. However, we show that the existing conversation within AI has imported concepts and conclusions from prior debates within computer security over the disclosure of software vulnerabilities. While disclosure of software vulnerabilities often favours defence, this cannot be assumed for AI research. The AI research community should consider concepts and policies from a broad set of adjacent fields, and ultimately needs to craft policy well-suited to its particular challenges.

Pairwise Fairness for Ranking and Regression, Harikrishna Narasimhan et al., January 7, 2020

We present pairwise fairness metrics for ranking models and regression models that form analogues of statistical fairness notions such as equal opportunity, equal accuracy, and statistical parity. Our pairwise formulation supports both discrete protected groups, and continuous protected attributes. We show that the resulting training problems can be efficiently and effectively solved using existing constrained optimization and robust optimization techniques developed for fair classification. Experiments illustrate the broad applicability and trade-offs of these methods.
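
A rough sketch of the pairwise idea for ranking appears below. The paper's formal definitions and its constrained training procedure differ in detail; the group labels, scores, and data here are illustrative.

```python
# Sketch of a pairwise fairness check for a ranking model: for each protected
# group, among pairs where a group member has the higher true label, how often
# does the model also score it higher?
import numpy as np

def pairwise_accuracy_by_group(scores, labels, groups, target_group):
    correct, total = 0, 0
    n = len(scores)
    for i in range(n):
        if groups[i] != target_group:
            continue
        for j in range(n):
            if labels[i] > labels[j]:          # item i should be ranked above item j
                total += 1
                correct += scores[i] > scores[j]
    return correct / total if total else float("nan")

scores = np.array([0.9, 0.2, 0.7, 0.4, 0.8, 0.1])   # model scores (illustrative)
labels = np.array([1, 0, 1, 0, 1, 0])               # true relevance labels
groups = np.array(["a", "a", "a", "b", "b", "b"])   # protected group membership

for g in ("a", "b"):
    print(g, pairwise_accuracy_by_group(scores, labels, groups, g))
```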

Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing, Inioluwa Deborah Raji et al., January 3, 2020

Rising concern for the societal implications of artificial intelligence systems has inspired a wave of academic and journalistic literature in which deployed systems are audited for harm by investigators from outside the organizations deploying the algorithms. However, it remains challenging for practitioners to identify the harmful repercussions of their own systems prior to deployment, and, once deployed, emergent issues can become difficult or impossible to trace back to their source. In this paper, we introduce a framework for algorithmic auditing that supports artificial intelligence system development end-to-end, to be applied throughout the internal organization development lifecycle. Each stage of the audit yields a set of documents that together form an overall audit report, drawing on an organization’s values or principles to assess the fit of decisions made throughout the process. The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity.

Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI, Michael A. Madaio & Jennifer Wortman Vaughan, 2020

Many organizations have published principles intended to guide the ethical development and deployment of AI systems; however, their abstract nature makes them difficult to operationalize. Some organizations have therefore produced AI ethics checklists, as well as checklists for more specific concepts, such as fairness, as applied to AI systems. But unless checklists are grounded in practitioners’ needs, they may be misused. To understand the role of checklists in AI ethics, we conducted an iterative co-design process with 48 practitioners, focusing on fairness. We co-designed an AI fairness checklist and identified desiderata and concerns for AI fairness checklists in general. We found that AI fairness checklists could provide organizational infrastructure for formalizing ad-hoc processes and empowering individual advocates. We discuss aspects of organizational culture that may impact the efficacy of such checklists, and highlight future research directions.

2019

Regulating AI and Machine Learning: Setting the Regulatory Agenda, Julia Black & Andrew Murray, 2019

Disruptive technologies arrive with regularity, whether the first industrial revolution with steam-powered factories and transportation, or subsequent revolutions that brought about chemical engineering, communications, aviation, and eventually biotechnology and digitisation. We stand at the edge of the next revolution, the AI revolution, where methods of artificial intelligence and machine learning offer possibilities hitherto unimagined. How this revolution develops and how our society absorbs the potential of this new technology will be largely determined by the models of regulation and governance applied to the nascent technology. In this paper the authors examine lessons from history and propose a framework for identifying and analysing the key elements of regulatory regimes and their interactions, which can form the basis for developing a new model for AI regulatory systems. Furthermore, they argue that the goals of such systems should be to manage the risks that different models and uses of AI pose, not just the ethical issues they create.

U.S. Public Opinion on the Governance of Artificial Intelligence, Baobao Zhang & Allan Dafoe, December 30, 2019

Artificial intelligence (AI) has widespread societal implications, yet social scientists are only beginning to study public attitudes toward the technology. Existing studies find that the public’s trust in institutions can play a major role in shaping the regulation of emerging technologies. Using a large-scale survey (N=2000), we examined Americans’ perceptions of 13 AI governance challenges as well as their trust in governmental, corporate, and multistakeholder institutions to responsibly develop and manage AI. While Americans perceive all of the AI governance issues to be important for tech companies and governments to manage, they have only low to moderate trust in these institutions to manage AI applications.

A Survey on Distributed Machine Learning, Joost Verbraeken et al., December 20, 2019

The demand for artificial intelligence has grown significantly over the last decade and this growth has been fueled by advances in machine learning techniques and the ability to leverage hardware acceleration. However, in order to increase the quality of predictions and render machine learning solutions feasible for more complex applications, a substantial amount of training data is required. Although small machine learning models can be trained with modest amounts of data, the input for training larger models such as neural networks grows exponentially with the number of parameters. Since the demand for processing training data has outpaced the increase in computation power of computing machinery, there is a need for distributing the machine learning workload across multiple machines, and turning the centralized system into a distributed one. These distributed systems present new challenges, first and foremost the efficient parallelization of the training process and the creation of a coherent model. This article provides an extensive overview of the current state of the art in the field by outlining the challenges and opportunities of distributed machine learning over conventional (centralized) machine learning, discussing the techniques used for distributed machine learning, and providing an overview of the systems that are available.
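
The core coordination problem the survey discusses, parallelizing training while keeping one coherent model, can be pictured with a toy data-parallel loop in which workers compute gradients on their shards and a coordinator averages them. This is purely illustrative and not taken from the article.

```python
# Toy illustration of data-parallel learning: each "worker" computes a gradient
# for linear regression on its own shard, and the coordinator averages the
# gradients into a single model update.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(1200, 2))
y = X @ w_true + rng.normal(scale=0.1, size=1200)
shards = np.array_split(np.arange(1200), 4)          # four workers

w = np.zeros(2)
for step in range(200):
    grads = []
    for idx in shards:                               # in practice, run in parallel
        err = X[idx] @ w - y[idx]
        grads.append(X[idx].T @ err / len(idx))
    w -= 0.1 * np.mean(grads, axis=0)                # coordinator averages and updates

print(w)   # should approach w_true
```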

(When) is Truth-Telling Favored in AI Debate?, Vojtech Kovarik & Ryan Carey, December 15, 2019

For some problems, it is difficult for humans to judge the goodness of AI-proposed solutions. Irving, Christiano, and Amodei (2018) propose that in such cases, we may use a debate between two AI systems to assist the human judge to select a good answer. We introduce a mathematical framework for modelling this type of debate and propose that the quality of debate designs may be measured by the accuracy of the most persuasive answer. We describe a simple instance of the debate framework called feature debate and analyze the degree to which such debates track the truth. We argue that despite being very simple, feature debates capture many aspects of practical debates such as the incentives to confuse the judge or stall to prevent losing. We analyze two special types of debates, those where arguments constitute independent evidence about the topic, and those where the information bandwidth of the judge is limited.

Asymptotically Unambitious Artificial General Intelligence, Michael K. Cohen et al., December 12, 2019

General intelligence, the ability to solve arbitrary solvable problems, is supposed by many to be artificially constructible. Narrow intelligence, the ability to solve a given particularly difficult problem, has seen impressive recent development. Notable examples include self-driving cars, Go engines, image classifiers, and translators. Artificial General Intelligence (AGI) presents dangers that narrow intelligence does not: if something smarter than us across every domain were indifferent to our concerns, it would be an existential threat to humanity, just as we threaten many species despite no ill will. Even the theory of how to maintain the alignment of an AGI’s goals with our own has proven highly elusive. We present the first algorithm we are aware of for asymptotically unambitious AGI, where “unambitiousness” includes not seeking arbitrary power. Thus, we identify an exception to the Instrumental Convergence Thesis, which is roughly that by default, an AGI would seek power, including over us.

Privacy Risks and Explaining Machine Learning Models, Reza Shokri et al., December 4, 2019

Can an adversary exploit model explanations to infer sensitive information about the models’ training set? To investigate this question, we first focus on membership inference attacks: given a data point and a model explanation, the attacker’s goal is to decide whether or not the point belongs to the training data. We study this problem for two popular transparency methods: gradient-based attribution methods and record-based influence measures. We develop membership inference attacks based on these model explanations, and extensively test them on a variety of datasets. For gradient-based methods, we show that the explanations can leak a significant amount of information about the individual data points in the training set, much beyond what is leaked through the predicted labels. We also show that record-based measures can be effectively, and even more significantly, exploited for membership inference attacks. More importantly, we design reconstruction attacks against this class of model explanations. We demonstrate that they can be exploited to recover significant parts of the training set. Finally, our results indicate that minorities and outliers are more vulnerable to these types of attacks than the rest of the population. Thus, there is a significant disparity in the privacy risks of model explanations across different groups.
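
The mechanics of a threshold-based membership inference attack on gradient explanations can be sketched as follows. This toy logistic-regression example only illustrates the structure of such an attack, not the paper's actual attacks or their effectiveness.

```python
# Highly simplified sketch: the attacker thresholds a statistic of the model
# explanation (here, the input-gradient norm) to guess membership. On this toy
# data members and non-members come from the same distribution, so no real
# signal is expected; the point is only to show the attack's shape.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] > 0).astype(int)
X_out = rng.normal(size=(200, 5))                    # non-member points

clf = LogisticRegression().fit(X_train, y_train)

def gradient_explanation(clf, x):
    # gradient of the predicted probability w.r.t. the input (a simple attribution)
    p = clf.predict_proba(x.reshape(1, -1))[0, 1]
    return clf.coef_[0] * p * (1 - p)

member_norms = [np.linalg.norm(gradient_explanation(clf, x)) for x in X_train]
nonmember_norms = [np.linalg.norm(gradient_explanation(clf, x)) for x in X_out]

# heuristic: points the model fits confidently often yield smaller explanation
# norms, so the attacker guesses "member" when the norm falls below a threshold
threshold = np.median(member_norms + nonmember_norms)
print("true positive rate:", np.mean(np.array(member_norms) < threshold))
print("false positive rate:", np.mean(np.array(nonmember_norms) < threshold))
```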

Proposed Guidelines for the Responsible Use of Explainable Machine Learning, Patrick Hall et al., November 29, 2019

Explainable machine learning (ML) enables human learning from ML, human appeal of automated model decisions, regulatory compliance, and security audits of ML models. Explainable ML (i.e. explainable artificial intelligence or XAI) has been implemented in numerous open source and commercial packages and explainable ML is also an important, mandatory, or embedded aspect of commercial predictive modeling in industries like financial services. However, like many technologies, explainable ML can be misused, particularly as a faulty safeguard for harmful black boxes, e.g. fairwashing or scaffolding, and for other malevolent purposes like stealing models and sensitive training data [1], [38], [40], [42], [45]. To promote best-practice discussions for this already in-flight technology, this short text presents internal definitions and a few examples in Section 2 before covering the proposed guidelines in Subsections 3.1 – 3.4. This text concludes in Section 4 with a seemingly natural argument for the use of interpretable models and explanatory, debugging, and disparate impact testing methods in life- or mission-critical ML systems.

On the Legal Compatibility of Fairness Definitions, Alice Xiang & Inioluwa Deborah Raji, November 25, 2019

Past literature has been effective in demonstrating ideological gaps in machine learning (ML) fairness definitions when considering their use in complex sociotechnical systems. However, we go further to demonstrate that these definitions often misunderstand the legal concepts from which they purport to be inspired, and consequently inappropriately co-opt legal language. In this paper, we demonstrate examples of this misalignment and discuss the differences in ML terminology and their legal counterparts, as well as what both the legal and ML fairness communities can learn from these tensions. We focus this paper on U.S. anti-discrimination law since the ML fairness research community regularly references terms from this body of law.

3PS – Online Privacy Through Group Identities, Pol Mac Aonghusa & Douglas J. Leith, November 17, 2019

Limiting online data collection to the minimum required for specific purposes is mandated by modern privacy legislation such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act. This is particularly true in online services where broad collection of personal information represents an obvious concern for privacy. We challenge the view that broad personal data collection is required to provide personalised services. By first developing formal models of privacy and utility, we show how users can obtain personalised content, while retaining an ability to plausibly deny their interests in topics they regard as sensitive, using a system of proxy group identities we call 3PS. Through extensive experiment on a prototype implementation, using openly accessible data sources, we show that 3PS provides personalised content to individual users over 98% of the time in our tests, while protecting plausible deniability effectively in the face of worst-case threats from a variety of attack types.

Experiences with Improving the Transparency of AI Models and Services, Michael Hind et al., November 11, 2019

AI models and services are used in a growing number of high-stakes areas, resulting in a need for increased transparency. Consistent with this, several proposals for higher quality and more consistent documentation of AI data, models, and systems have emerged. Little is known, however, about the needs of those who would produce or consume these new forms of documentation. Through semi-structured developer interviews, and two document creation exercises, we have assembled a clearer picture of these needs and the various challenges faced in creating accurate and useful AI documentation. Based on the observations from this work, supplemented by feedback received during multiple design explorations and stakeholder conversations, we make recommendations for easing the collection and flexible presentation of AI facts to promote transparency.

Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms, Zhong Qiu Lin et al., October 29, 2019

There has been a significant surge of interest recently around the concept of explainable artificial intelligence (XAI), where the goal is to produce an interpretation for a decision made by a machine learning algorithm. Of particular interest is the interpretation of how deep neural networks make decisions, given the complexity and `black box’ nature of such networks. Given the infancy of the field, there has been very limited exploration into the assessment of the performance of explainability methods, with most evaluations centered around subjective visual interpretation of the produced interpretations. In this study, we explore a more machine-centric strategy for quantifying the performance of explainability methods on deep neural networks via the notion of decision-making impact analysis. We introduce two quantitative performance metrics: i) Impact Score, which assesses the percentage of critical factors with either strong confidence reduction impact or decision changing impact, and ii) Impact Coverage, which assesses the percentage coverage of adversarially impacted factors in the input. A comprehensive analysis using this approach was conducted on several state-of-the-art explainability methods (LIME, SHAP, Expected Gradients, GSInquire) on a ResNet-50 deep convolutional neural network using a subset of ImageNet for the task of image classification. Experimental results show that the critical regions identified by LIME within the tested images had the lowest impact on the decision-making process of the network (~38%), with progressive increase in decision-making impact for SHAP (~44%), Expected Gradients (~51%), and GSInquire (~76%). While by no means perfect, the hope is that the proposed machine-centric strategy helps push the conversation forward towards better metrics for evaluating explainability methods and improve trust in deep neural networks.
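
The Impact Score idea can be paraphrased in code: occlude the critical region an explainer highlights, re-run the classifier, and count decision flips or sharp confidence drops. The toy model, toy explainer, and the 50% drop threshold below are placeholders, not the paper's exact protocol.

```python
# Sketch of an Impact Score computation over a batch of images.
import numpy as np

rng = np.random.default_rng(0)

def model(img):
    # toy "classifier": class scores from the mean brightness of two image halves
    logits = np.array([img[:, :8].mean(), img[:, 8:].mean()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def explainer(img, cls):
    # toy "explainer": mark the brightest 10% of pixels as critical
    return img > np.quantile(img, 0.9)

def impact_score(model, explainer, images, drop=0.5):
    impacted = 0
    for img in images:
        probs = model(img)
        cls, conf = int(probs.argmax()), probs.max()
        occluded = img.copy()
        occluded[explainer(img, cls)] = 0.0          # blank out the critical region
        new_probs = model(occluded)
        if int(new_probs.argmax()) != cls or new_probs[cls] < drop * conf:
            impacted += 1
    return impacted / len(images)

images = [rng.random((16, 16)) for _ in range(20)]
print(impact_score(model, explainer, images))
```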

Rising Through the Ranks: How Algorithms Rank and Curate Content in Search Results and on News Feeds, Spandana Singh, October 21, 2019

Internet platforms are increasingly adopting artificial intelligence and machine-learning tools in order to shape the content we see and engage with online. Algorithms have long been deployed to rank and curate search engine results. And thanks to advances over the last decade, these algorithms also play a growing role in shaping the content we see in news feeds. This report is the second in a series of four reports that will explore different issues regarding how automated tools are used by internet platforms to shape the content we see and influence how this content is delivered to us. The first report in this series focused on how automated tools can be leveraged to moderate content online. This second report explores how internet platforms deploy algorithms to rank and curate content in search engine results and in news feeds. The following two reports will focus on how artificial intelligence is used to optimize the targeting and delivery of advertisements and the delivery of content recommendations to users based on their prior consumption of content. All four of these reports also seek to explore how internet platforms, policymakers, and researchers can better promote fairness, accountability, and transparency around these automated tools and decision-making practices.

Fairness-Aware Machine Learning: An Extensive Overview, Jannik Dunkelau & Michael Leuschel, October 17, 2019

In today’s world, artificial intelligence (AI) increasingly surrounds us in our day-to-day lives. This is especially true for machine learning algorithms, which learn their behaviours by recognising patterns in existing data and applying them to new instances to make correct predictions quickly. This is desirable as it reduces the factor of human error and speeds up various processes, taking less than a second for a decision which would take a human worker multiple minutes. For instance, a company can reliably speed up its hiring process by algorithmically filtering through hundreds of applications, leaving a more manageable amount for human review. The recidivism risk-scores for criminals can also be computationally determined, reducing human error in this regard, leading to a more reliable scoring system altogether. Another example might be the admission of students into universities, favouring those who have a higher chance of graduating instead of dropping out. However, even without using any sensitive attribute like race, sex, age, or religion as input, the algorithms might still learn how to discriminate against them. This gives rise to new legal implications, as well as ethical problems. The fairness-aware machine learning community only began to develop in the last ten years, with the first publication to the best of our knowledge leading back to Pedreschi et al. in 2008 [151]. Since then there has been a steady growth of interest, giving way to a multitude of different fairness notions, as well as algorithms for preventing machine-learned bias in the first place. In this survey paper, we compile the current state of research regarding fairness-aware machine learning. This includes definitions of different fairness notions and algorithms, as well as discussion of problems and sources of machine discrimination. By bundling the information from different sources, this paper serves as a rich entry-point for researchers new to the area, as well as an extensive fairness bibliography, spanning also legal references and examples of employed machine learning systems. The remainder of this paper is structured as follows: the rest of this section motivates fairness-aware algorithms on legal grounds, discusses various causes of unfairness, and the resulting implications of discrimination in Sections 1.1 to 1.3 respectively. Section 2 goes over related work, i.e. other survey papers considering different parts of the whole research area. Section 3 establishes a common ground for nomenclature used throughout the paper, with Section 4 introducing the necessary mathematical notation. Sections 5 to 8 list various definitions of algorithmic fairness, as well as pre-, in-, and post-processing algorithms found in an extensive literature review. In Section 9, different frameworks, toolkits, as well as common databases used in literature are presented. The paper concludes with some final remarks in Section 10.

Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations, Margot E. Kaminski & Gianclaudio Malgieri, October 6, 2019

Policy-makers, scholars, and commentators are increasingly concerned with the risks of using profiling algorithms and automated decision-making. The EU’s General Data Protection Regulation (GDPR) has tried to address these concerns through an array of regulatory tools. As one of us has argued, the GDPR combines individual rights with systemic governance, towards algorithmic accountability. The individual tools are largely geared towards individual “legibility”: making the decision-making system understandable to an individual invoking her rights. The systemic governance tools, instead, focus on bringing expertise and oversight into the system as a whole, and rely on the tactics of “collaborative governance,” that is, use public-private partnerships towards these goals. How these two approaches to transparency and accountability interact remains a largely unexplored question, with much of the legal literature focusing instead on whether there is an individual right to explanation.

The Thing Called Fairness: Disciplinary Confusion Realizing a Value in Technology, ACM, September 26, 2019

The explosion in the use of software in important sociotechnical systems has renewed focus on the study of the way technical constructs reflect policies, norms, and human values. This effort requires the engagement of scholars and practitioners from many disciplines. And yet, these disciplines often conceptualize the operative values very differently while referring to them using the same vocabulary. The resulting conflation of ideas confuses discussions about values in technology at disciplinary boundaries. In the service of improving this situation, this paper examines the value of shared vocabularies, analytics, and other tools that facilitate conversations about values in light of these discipline-specific conceptualizations, the role such tools play in furthering research and practice, outlines different conceptions of “fairness” deployed in discussions about computer systems, and provides an analytic tool for interdisciplinary discussions and collaborations around the concept of fairness. We use a case study of risk assessments in criminal justice applications to both motivate our effort–describing how conflation of different concepts under the banner of “fairness” led to unproductive confusion–and illustrate the value of the fairness analytic by demonstrating how the rigorous analysis it enables can assist in identifying key areas of theoretical, political, and practical misunderstanding or disagreement, and, where desired, support alignment or collaboration in the absence of consensus.

Explainable Machine Learning in Deployment, Umang Bhatt et al., September 13, 2019

Explainable machine learning seeks to provide various stakeholders with insights into model behavior via feature importance scores, counterfactual explanations, and influential samples, among other techniques. Recent advances in this line of work, however, have gone without surveys of how organizations are using these techniques in practice. This study explores how organizations view and use explainability for stakeholder consumption. We find that the majority of deployments are not for end users affected by the model but for machine learning engineers, who use explainability to debug the model itself. There is a gap between explainability in practice and the goal of public transparency, since explanations primarily serve internal stakeholders rather than external ones. Our study synthesizes the limitations with current explainability techniques that hamper their use for end users. To facilitate end user interaction, we develop a framework for establishing clear goals for explainability, including a focus on normative desiderata.

Recommendation of the Council on Artificial Intelligence, OECD, May 21, 2019

The Recommendation on Artificial Intelligence (AI) – the first intergovernmental standard on AI – was adopted by the OECD Council at Ministerial level on 22 May 2019 on the proposal of the Committee on Digital Economy Policy (CDEP). The Recommendation aims to foster innovation and trust in AI by promoting the responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values. Complementing existing OECD standards in areas such as privacy, digital security risk management, and responsible business conduct, the Recommendation focuses on AI-specific issues and sets a standard that is implementable and sufficiently flexible to stand the test of time in this rapidly evolving field.

Affinity Profiling and Discrimination by Association in Online Behavioural Advertising, Sandra Wachter, May 15, 2019

Affinity profiling – grouping people according to their assumed interests rather than solely their personal traits – has become commonplace in the online advertising industry. Online platform providers use online behavioural advertising (OBA) and can infer very sensitive information (e.g. ethnicity, gender, sexual orientation, religious beliefs) about individuals to target or exclude certain groups from products and services, or to offer different prices. OBA and affinity profiling raise at least three distinct legal challenges: privacy, non-discrimination, and group level protection. Current regulatory frameworks may be ill-equipped to sufficiently protect against all three harms. I first examine several shortfalls of the General Data Protection Regulation (GDPR) concerning governance of sensitive inferences and profiling. I then show the gaps of EU non-discrimination law in relation to affinity profiling in terms of its areas of application (i.e. employment, welfare, goods and services) and the types of attributes and people it protects. I propose that applying the concept of ‘discrimination by association’ can help close some of these gaps in legal protection against OBA. This concept challenges the idea of strictly differentiating between assumed interests and personal traits when profiling people. Failing to acknowledge the potential relationship – be it direct or indirect – between assumed interests and personal traits could render non-discrimination law ineffective. Discrimination by association occurs when a person is treated significantly worse than others (e.g. not being shown an advertisement) based on their relationship or association (e.g. assumed gender or affinity) with a protected group. Crucially, the individual does not need to be a member of the protected group to receive protection. Protection does not hinge on whether the measure taken is based on a protected attribute that an individual actually possesses, or on their mere association with a protected group. Discrimination by association would help to overcome the argument that inferring one’s ‘affinity for’ and ‘membership in’ a protected group are strictly unrelated. Not needing to be a part of the protected group, as I will argue, also negates the need for people who are part of the protected group to ‘out’ themselves as members of the group (e.g. sexual orientation, religion) to receive protection, if they prefer. Finally, individuals who have been discriminated against but are not actually members of the protected group (e.g. people who have been misclassified as women) could also bring a claim. Even if these gaps are closed, challenges remain. The lack of transparent business models and practices could pose a considerable barrier to proving non-discrimination cases. Finally, inferential analytics and AI expand the circle of potential victims of undesirable treatment in this context by grouping people according to inferred or correlated similarities and characteristics. These new groups are not accounted for in data protection and non-discrimination law.

A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI, Sandra Wachter & Brent Mittelstadt, May 1, 2019

Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Data protection law is meant to protect people’s privacy, identity, reputation, and autonomy, but is currently failing to protect data subjects from the novel risks of inferential analytics. The legal status of inferences is heavily disputed in legal scholarship, and marked by inconsistencies and contradictions within and between the views of the Article 29 Working Party and the European Court of Justice (ECJ). This Article shows that individuals are granted little control or oversight over how their personal data is used to draw inferences about them. Compared to other types of personal data, inferences are effectively “economy class” personal data in the General Data Protection Regulation (GDPR). Data subjects’ rights to know about (Articles 13–15), rectify (Article 16), delete (Article 17), object to (Article 21), or port (Article 20) personal data are significantly curtailed for inferences. The GDPR also provides insufficient protection against sensitive inferences (Article 9) or remedies to challenge inferences or important decisions based on them (Article 22(3)). This situation is not accidental. In standing jurisprudence the ECJ has consistently restricted the remit of data protection law to assessing the legitimacy of input personal data undergoing processing, and to rectify, block, or erase it. Critically, the ECJ has likewise made clear that data protection law is not intended to ensure the accuracy of decisions and decision-making processes involving personal data, or to make these processes fully transparent. Current policy proposals addressing privacy protection (the ePrivacy Regulation and the EU Digital Content Directive) and Europe’s new Copyright Directive and Trade Secrets Directive also fail to close the GDPR’s accountability gaps concerning inferences. This Article argues that a new data protection right, the “right to reasonable inferences,” is needed to help close the accountability gap currently posed by “high risk inferences,” meaning inferences drawn from Big Data analytics that damage privacy or reputation, or have low verifiability in the sense of being predictive or opinion-based while being used in important decisions. This right would require ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data form a normatively acceptable basis from which to draw inferences; (2) why these inferences are relevant and normatively acceptable for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged.

Gender, Race, and Power in AI: A Playlist, AI Now Institute, April 17, 2019

(Contains many books/articles to add to our wiki, some of which might already be listed on our wiki.) Gender, Race, and Power in AI is the product of a year-long survey of literature at the nexus of gender, race, and power in the field of artificial intelligence. [The] study surfaced some astonishing gaps, but it also made clear that scholars of diverse gender and racial backgrounds have been sounding the alarm about inequity and discrimination in artificial intelligence for decades.

The Machine Learning Reproducibility Checklist, McGill University, March 27, 2019

A checklist to ensure that a machine learning result is reproducible.
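
One practical item such a checklist implies, pinning random seeds and recording library versions alongside results, can be sketched as follows. The checklist itself covers much more, such as data splits, hyperparameters, compute, and statistical reporting.

```python
# Minimal reproducibility habit: fix random seeds and log the library versions
# used for a run, so the result can be regenerated later.
import random
import numpy as np

SEED = 42                       # illustrative seed value
random.seed(SEED)
np.random.seed(SEED)

print("numpy", np.__version__, "| seed", SEED)   # record versions alongside results
```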

Towards Federated Learning at Scale: System Design, Keith Bonawitz et al., March 22, 2019

Federated Learning is a distributed machine learning approach which enables model training on a large corpus of decentralized data. We have built a scalable production system for Federated Learning in the domain of mobile devices, based on TensorFlow. In this paper, we describe the resulting high-level design, sketch some of the challenges and their solutions, and touch upon the open problems and future directions.
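
The core of federated averaging can be sketched generically. The paper describes a TensorFlow-based production system for mobile devices; the NumPy toy below only shows clients training locally and a server averaging their weights by data size.

```python
# Generic federated-averaging sketch (illustrative, not the paper's system):
# clients run local gradient steps, the server averages the resulting weights
# weighted by each client's data size.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.5, -2.0])
clients = []
for _ in range(5):                                   # five clients, varying data sizes
    n = int(rng.integers(50, 200))
    X = rng.normal(size=(n, 2))
    y = X @ w_true + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

def local_update(w, X, y, lr=0.05, epochs=5):
    w = w.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)         # local gradient steps
    return w

w_global = np.zeros(2)
for rnd in range(20):                                # communication rounds
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(w_global, X, y))
        sizes.append(len(y))
    w_global = np.average(updates, axis=0, weights=sizes)

print(w_global)   # should approach w_true
```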

Binary Governance: Lessons from the GDPR’s Approach to Algorithmic Accountability, Margot E. Kaminski, March 12, 2019

Algorithms are now used to make significant decisions about individuals, from credit determinations to hiring and firing. But they are largely unregulated under U.S. law. A quickly growing literature has split on how to address algorithmic decision-making, with individual rights and accountability to nonexpert stakeholders and to the public at the crux of the debate. In this Article, I make the case for why both individual rights and public- and stakeholder-facing accountability are not just goods in and of themselves but crucial components of effective governance. Only individual rights can fully address dignitary and justificatory concerns behind calls for regulating algorithmic decision-making. And without some form of public and stakeholder accountability, collaborative public-private approaches to systemic governance of algorithms will fail.

Forbes Insights AI Issue 05: AI & Ethics, Forbes and Intel, March 2019

With AI taking off, the need for data is greater than ever. Much of it comes from consumers. So how do society, industry, and government balance this voracious need for data with the protections that consumers are demanding? Can legal structures help to manage the inherent conflict between AI and privacy? And what is privacy in 2019, anyway? [Additionally,] as companies increasingly rely on machine learning solutions to inform key decisions, they’re running into ethical dilemmas. Solving them will require a complex and long-term conversation, but there are effective steps that enlightened companies can take right now.

Mathematical Notions vs. Human Perception of Fairness: A Descriptive Approach to Fairness for Machine Learning, Megha Srivastava & Hoda Heidari, February 19, 2019

Fairness for Machine Learning has received considerable attention recently. Various mathematical formulations of fairness have been proposed, and it has been shown that it is impossible to satisfy all of them simultaneously. The literature so far has dealt with these impossibility results by quantifying the tradeoffs between different formulations of fairness. Our work takes a different perspective on this issue. Rather than requiring all notions of fairness to (partially) hold at the same time, we ask which one of them is the most appropriate given the societal domain in which the decision-making model is to be deployed. We take a descriptive approach and set out to identify the notion of fairness that best captures lay people’s perception of fairness. We run adaptive experiments designed to pinpoint the most compatible notion of fairness with each participant’s choices through a small number of tests. Perhaps surprisingly, we find that the most simplistic mathematical definition of fairness—namely, demographic parity—most closely matches people’s idea of fairness in two distinct application scenarios. This remains the case even when we explicitly tell the participants about the alternative, more complicated definitions of fairness and we reduce the cognitive burden of evaluating those notions for them. Our findings have important implications for the Fair ML literature and the discourse on formalizing algorithmic fairness.
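
Demographic parity, the notion participants favoured, is simple enough to state in a few lines; the group labels and decisions below are illustrative.

```python
# Demographic parity check: compare positive-decision rates across groups.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups    = np.array(["a", "a", "a", "b", "b", "b", "a", "b", "b", "a"])

rate_a = decisions[groups == "a"].mean()
rate_b = decisions[groups == "b"].mean()
print(f"P(decision=1 | a) = {rate_a:.2f}, P(decision=1 | b) = {rate_b:.2f}")
print("demographic parity gap:", abs(rate_a - rate_b))
```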

Discrimination in the Age of Algorithms, Jon Kleinberg, et al., February 12, 2019

The law forbids discrimination. But the ambiguity of human decision-making often makes it extraordinarily hard for the legal system to know whether anyone has actually discriminated. To understand how algorithms affect discrimination, we must therefore also understand how they affect the problem of detecting discrimination. By one measure, algorithms are fundamentally opaque, not just cognitively but even mathematically. Yet for the task of proving discrimination, processes involving algorithms can provide crucial forms of transparency that are otherwise unavailable. These benefits do not happen automatically. But with appropriate requirements in place, the use of algorithms will make it possible to more easily examine and interrogate the entire decision process, thereby making it far easier to know whether discrimination has occurred. By forcing a new level of specificity, the use of algorithms also highlights, and makes transparent, central tradeoffs among competing values. Algorithms are not only a threat to be regulated; with the right safeguards in place, they have the potential to be a positive force for equity.

Differential Privacy: A Primer for a Non-Technical Audience, Alexandra Wood, et al., February 2019

Differential privacy is a formal mathematical framework for quantifying and managing privacy risks. It provides provable privacy protection against a wide range of potential attacks, including those currently unforeseen. Differential privacy is primarily studied in the context of the collection, analysis, and release of aggregate statistics. These range from simple statistical estimations, such as averages, to machine learning. Tools for differentially private analysis are now in early stages of implementation and use across a variety of academic, industry, and government settings. Interest in the concept is growing among potential users of the tools, as well as within legal and policy communities, as it holds promise as a potential approach to satisfying legal requirements for privacy protection when handling personal information. In particular, differential privacy may be seen as a technical solution for analyzing and sharing data while protecting the privacy of individuals in accordance with existing legal or policy requirements for de-identification or disclosure limitation. This primer seeks to introduce the concept of differential privacy and its privacy implications to non-technical audiences. It provides a simplified and informal, but mathematically accurate, description of differential privacy. Using intuitive illustrations and limited mathematical formalism, it discusses the definition of differential privacy, how differential privacy addresses privacy risks, how differentially private analyses are constructed, and how such analyses can be used in practice. A series of illustrations is used to show how practitioners and policymakers can conceptualize the guarantees provided by differential privacy. These illustrations are also used to explain related concepts, such as composition (the accumulation of risk across multiple analyses), privacy loss parameters, and privacy budgets. This primer aims to provide a foundation that can guide future decisions when analyzing and sharing statistical data about individuals, informing individuals about the privacy protection they will be afforded, and designing policies and regulations for robust privacy protection.
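
The primer's building blocks can be illustrated with the Laplace mechanism for counting queries, where epsilon is the privacy-loss parameter and sequential composition adds up the per-query epsilons. This is a textbook sketch, not code taken from the primer.

```python
# Sketch of the Laplace mechanism: add noise scaled to sensitivity/epsilon
# to a counting query before releasing it.
import numpy as np

rng = np.random.default_rng(0)

def dp_count(values, predicate, epsilon):
    true_count = sum(predicate(v) for v in values)
    sensitivity = 1.0                              # one person changes a count by at most 1
    return true_count + rng.laplace(scale=sensitivity / epsilon)

ages = rng.integers(18, 90, size=1000)             # hypothetical dataset
# two queries against the same data: under sequential composition the total
# privacy loss is the sum of the per-query epsilons (here 0.1 + 0.1 = 0.2)
print(dp_count(ages, lambda a: a >= 65, epsilon=0.1))
print(dp_count(ages, lambda a: a < 30,  epsilon=0.1))
```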

2018

AI4People – An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations, Luciano Floridi, et al., November 26, 2018

This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.

Deus ex Machina: Regulating Cybersecurity and Artificial Intelligence for Patients of the Future, Charlotte Tschider, November 15, 2018

Translated as “god from a machine,” the deus ex machina mechanically suspended Greek gods above a theatre stage to resolve plot issues by divine intervention. The implement’s name has since extended to the rapidly expanding Artificial Intelligence (AI) field, as AI has similarly promised miraculous resolution to any number of human challenges, including chronic health conditions. Modern medical devices and other health applications use AI to automate computer functionality, including Internet-connected medical devices, ubiquitous mobile device use, and individual self-monitoring consumer health devices. Unlike human algorithmic programming of the past, AI enables more powerful and automated dynamic algorithmic calculation, or automation, which surpasses human data science in accuracy. AI has the potential to revolutionize modern medicine, yet exceptionally large data volumes coupled with automated functionality and Internet connectivity will likely introduce previously unanticipated device safety issues. Although businesses increasingly integrate AI services into medical devices worldwide, the United States (U.S.) and the European Union (EU), typically global leaders in regulating health and medical device technology, have not established legal frameworks that adequately address big data, cybersecurity, and AI risks. With new technology, the principles informing medical device risk management activities no longer effectively manage patient safety risks. The U.S. and the EU must consider alternative strategies for regulating medical devices that adequately anticipate cybersecurity and AI risks.

A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI, Sandra Wachter & Brent Mittelstadt, October 14, 2018

In this paper we argue that a new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap currently posed by ‘high risk inferences’, meaning inferences that are privacy invasive or reputation damaging and have low verifiability in the sense of being predictive or opinion-based. In cases where algorithms draw ‘high risk inferences’ about individuals, this right would require ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data is a relevant basis to draw inferences; (2) why these inferences are relevant for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged. A right to reasonable inferences must, however, be reconciled with EU jurisprudence and counterbalanced with IP and trade secrets law as well as freedom of expression and Article 16 of the EU Charter of Fundamental Rights: the freedom to conduct a business.

Differentially-Private “Draw and Discard” Machine Learning, Vasyl Pihur et al., October 10, 2018

In this work, we propose a novel framework for privacy-preserving client-distributed machine learning. It is motivated by the desire to achieve differential privacy guarantees in the local model of privacy in a way that satisfies all systems constraints using asynchronous client-server communication and provides attractive model learning properties. We call it “Draw and Discard” because it relies on random sampling of models for load distribution (scalability), which also provides additional server-side privacy protections and improved model quality through averaging. We present the mechanics of client and server components of “Draw and Discard” and demonstrate how the framework can be applied to learning Generalized Linear models. We then analyze the privacy guarantees provided by our approach against several types of adversaries and showcase experimental results that provide evidence for the framework’s viability in practical deployments. We believe our framework is the first deployed distributed machine learning approach that operates in the local privacy model.
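
As a rough illustration of the draw-and-discard cycle described above, the sketch below keeps several server-side model instances, lets each simulated client draw one at random, apply a noisy local update, and return it, after which the server overwrites a randomly chosen instance. The linear-model update rule, Gaussian noise, and all parameters are simplifying assumptions for illustration, not the authors' exact mechanism or privacy analysis.

```python
# Rough, simplified sketch of a "draw and discard" update cycle.
# The linear-model update, Gaussian noise, and all parameters are illustrative
# assumptions; see the paper for the actual mechanism and privacy guarantees.
import random

K = 5                      # number of model instances the server maintains
DIM = 3                    # model dimensionality
NOISE_STD = 0.1            # stand-in for the local privacy noise
LEARNING_RATE = 0.05

server_models = [[0.0] * DIM for _ in range(K)]   # server-side pool of models

def client_update(model, features, label):
    """One noisy SGD step on a single (features, label) example (squared loss)."""
    prediction = sum(w * x for w, x in zip(model, features))
    error = prediction - label
    return [
        w - LEARNING_RATE * error * x + random.gauss(0.0, NOISE_STD)
        for w, x in zip(model, features)
    ]

def draw_and_discard_round(features, label):
    drawn_index = random.randrange(K)          # client draws a random instance
    updated = client_update(server_models[drawn_index], features, label)
    discard_index = random.randrange(K)        # server discards a random instance
    server_models[discard_index] = updated     # ...and replaces it with the update

if __name__ == "__main__":
    for _ in range(200):                       # simulate 200 client contributions
        x = [random.uniform(-1, 1) for _ in range(DIM)]
        y = 2 * x[0] - x[1] + 0.5 * x[2]       # toy ground-truth relationship
        draw_and_discard_round(x, y)
    avg = [sum(m[i] for m in server_models) / K for i in range(DIM)]
    print("averaged model:", avg)              # should drift toward [2, -1, 0.5]
```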

Model Cards for Model Reporting, Margaret Mitchell, et al., October 5, 2018

Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting.
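
To make the proposal concrete, a model card can be shipped as structured metadata alongside a trained model. The field names below follow the general spirit of the framework (intended use, evaluation data, disaggregated metrics, caveats), but the exact schema and all values are illustrative assumptions rather than the paper's template.

```python
# Illustrative sketch of a model card as structured metadata.
# Field names loosely follow the paper's framework; the schema and values are made up.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    model_version: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    evaluation_data: str = ""
    metrics_by_group: dict = field(default_factory=dict)   # disaggregated evaluation
    caveats: list = field(default_factory=list)

if __name__ == "__main__":
    card = ModelCard(
        model_name="toy-sentiment-classifier",
        model_version="0.1",
        intended_use="Ranking customer feedback for manual review.",
        out_of_scope_uses=["Employment or credit decisions"],
        evaluation_data="Held-out sample of 10,000 labeled reviews (hypothetical).",
        metrics_by_group={"overall": {"accuracy": 0.91},
                          "reviews_in_english": {"accuracy": 0.93},
                          "reviews_translated": {"accuracy": 0.84}},
        caveats=["Accuracy drops on machine-translated text."],
    )
    print(json.dumps(asdict(card), indent=2))   # publish alongside the trained model
```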

Artificial Intelligence & Human Rights: Opportunities & Risks, Filippo A. Raso, et al., September 25, 2018

This report explores the human rights impacts of artificial intelligence (“AI”) technologies. It highlights the risks that AI, algorithms, machine learning, and related technologies may pose to human rights, while also recognizing the opportunities these technologies present to enhance the enjoyment of the rights enshrined in the Universal Declaration of Human Rights (“UDHR”). The report draws heavily on the United Nations Guiding Principles on Business and Human Rights (“Guiding Principles”) to propose a framework for identifying, mitigating, and remedying the human rights risks posed by AI.

Algorithms that Remember: Model Inversion Attacks and Data Protection Law, Michael Veale, Reuben Binns & Lilian Edwards, August 6, 2018

Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms. The EU’s recent General Data Protection Regulation (GDPR) has been seen as a core tool for achieving better governance of this area. While the GDPR does apply to the use of models in some limited situations, most of its provisions relate to the governance of personal data, while models have traditionally been seen as intellectual property. We present recent work from the information security literature around ‘model inversion’ and ‘membership inference’ attacks, which indicate that the process of turning training data into machine learned systems is not one-way, and demonstrate how this could lead some models to be legally classified as personal data. Taking this as a probing experiment, we explore the different rights and obligations this would trigger and their utility, and posit future directions for algorithmic governance and regulation.
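
One of the simplest attacks in this family, useful for intuition about why training data can leak back out of a model, is a confidence-threshold membership inference test: overfit models tend to be markedly more confident on records they were trained on. The sketch below (assuming scikit-learn is available) illustrates that generic attack on a deliberately overfit toy model; it is not one of the specific attacks analyzed in the paper.

```python
# Toy illustration of a confidence-threshold membership inference test,
# assuming scikit-learn is available. Real attacks (e.g., shadow-model attacks)
# are more elaborate; this only conveys the basic intuition that overfit models
# are systematically more confident on their own training data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
X_train, y_train = X[:100], y[:100]          # "members" of the training set
X_out, y_out = X[100:], y[100:]              # "non-members"

# Deliberately overfit: fully grown trees, no regularization.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def top_confidence(clf, data):
    return clf.predict_proba(data).max(axis=1)

threshold = 0.9
in_guess = top_confidence(model, X_train) >= threshold
out_guess = top_confidence(model, X_out) >= threshold
print("flagged as members (true members):    ", in_guess.mean())
print("flagged as members (true non-members):", out_guess.mean())
```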

The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning, Sam Corbett-Davies & Sharad Goel, July 31, 2018

The nascent field of fair machine learning aims to ensure that decisions guided by algorithms are equitable. Over the last several years, three formal definitions of fairness have gained prominence: (1) anti-classification, (2) classification parity, and (3) calibration. Here we show that all three of these fairness definitions suffer from significant statistical limitations. In contrast to these formal fairness criteria, we argue that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce. Such a strategy, while not universally applicable, often aligns well with policy objectives; notably, this strategy will typically violate both anti-classification and classification parity. In practice, it requires significant effort to construct suitable risk estimates. One must carefully define and measure the targets of prediction to avoid retrenching biases in the data. But, importantly, one cannot generally address these difficulties by requiring that algorithms satisfy popular mathematical formalizations of fairness. By highlighting these challenges in the foundation of fair machine learning, we hope to help researchers and practitioners productively advance the area.

Troubling Trends in Machine Learning Scholarship, Zachary C. Lipton et al., July 26, 2018

Collectively, machine learning (ML) researchers are engaged in the creation and dissemination of knowledge about data-driven algorithms. In a given paper, researchers might aspire to any subset of the following goals, among others: to theoretically characterize what is learnable, to obtain understanding through empirically rigorous experiments, or to build a working system that has high predictive accuracy. While determining which knowledge warrants inquiry may be subjective, once the topic is fixed, papers are most valuable to the community when they act in service of the reader, creating foundational knowledge and communicating as clearly as possible. Recent progress in machine learning comes despite frequent departures from these ideals. In this paper, we focus on the following four patterns that appear to us to be trending in ML scholarship: (i) failure to distinguish between explanation and speculation; (ii) failure to identify the sources of empirical gains, e.g., emphasizing unnecessary modifications to neural architectures when gains actually stem from hyper-parameter tuning; (iii) mathiness: the use of mathematics that obfuscates or impresses rather than clarifies, e.g., by confusing technical and non-technical concepts; and (iv) misuse of language, e.g., by choosing terms of art with colloquial connotations or by overloading established technical terms. While the causes behind these patterns are uncertain, possibilities include the rapid expansion of the community, the consequent thinness of the reviewer pool, and the often-misaligned incentives between scholarship and short-term measures of success (e.g., bibliometrics, attention, and entrepreneurial opportunity). While each pattern offers a corresponding remedy (don’t do it), we also discuss some speculative suggestions for how the community might combat these trends.

Model Reconstruction from Model Explanations, Smitha Milli, et al., July 13, 2018

We show through theory and experiment that gradient-based explanations of a model quickly reveal the model itself. Our results speak to a tension between the desire to keep a proprietary model secret and the ability to offer model explanations. On the theoretical side, we give an algorithm that provably learns a two-layer ReLU network in a setting where the algorithm may query the gradient of the model with respect to chosen inputs. The number of queries is independent of the dimension and nearly optimal in its dependence on the model size. Of interest not only from a learning-theoretic perspective, this result highlights the power of gradients rather than labels as a learning primitive. Complementing our theory, we give effective heuristics for reconstructing models from gradient explanations that are orders of magnitude more query-efficient than reconstruction attacks relying on prediction interfaces.
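
The power of gradient queries is easiest to see in the degenerate linear case: for f(x) = w·x + b, the gradient with respect to the input is exactly w, so a single gradient "explanation" plus one prediction reveals the whole model. The sketch below demonstrates only that easy case; the paper's results cover the much harder two-layer ReLU setting.

```python
# Illustration of why gradient explanations leak model parameters.
# For a linear model f(x) = w.x + b, the input-gradient at any point equals w,
# so one gradient query plus one prediction query reconstructs the model exactly.
# (The paper's actual results handle two-layer ReLU networks; this is the easy case.)
import numpy as np

rng = np.random.default_rng(1)
w_secret = rng.normal(size=5)          # proprietary model weights (unknown to attacker)
b_secret = 0.7

def predict(x):
    return float(w_secret @ x + b_secret)

def gradient_explanation(x):
    """What a gradient-explanation API would return: d f / d x at the query point."""
    return w_secret.copy()             # for a linear model this is just w

# Attacker: one gradient query recovers w; one prediction at x = 0 recovers b.
x_query = rng.normal(size=5)
w_recovered = gradient_explanation(x_query)
b_recovered = predict(np.zeros(5))

print(np.allclose(w_recovered, w_secret), abs(b_recovered - b_secret) < 1e-12)
```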

Conference Paper Submissions, FAT ML Conference, July 15, 2018

Fairness and Machine Learning, Solon Barocas, Moritz Hardt, & Arvind Narayanan, July 6, 2018

This book gives a perspective on machine learning that treats fairness as a central concern rather than an afterthought. [The authors] review the practice of machine learning in a way that highlights ethical challenges.

A Right to Explanation, Explained, Margot E. Kaminski, June 15, 2018

Many have called for algorithmic accountability: laws governing decision-making by complex algorithms, or AI. The EU’s General Data Protection Regulation (GDPR) now establishes exactly this. The recent debate over the right to explanation (a right to information about individual decisions made by algorithms) has obscured the significant algorithmic accountability regime established by the GDPR. The GDPR’s provisions on algorithmic accountability, which include a right to explanation, have the potential to be broader, stronger, and deeper than the preceding requirements of the Data Protection Directive. This Essay clarifies, largely for a U.S. audience, what the GDPR actually requires, incorporating recently released authoritative guidelines.

Robots Welcome? Ethical and Legal Considerations for Web Crawling and Scraping, Zachary Gold & Mark Latonero, June 7, 2018

Web crawlers are widely used software programs designed to automatically search the online universe to find and collect information. The data that crawlers provide help make sense of the vast and often chaotic nature of the Web. Crawlers find websites and content that power search engines and online marketplaces . . . . Despite the ubiquity of crawlers, their use is ambiguously regulated largely by online social norms whereby webpage headers signal whether automated “robots” are welcome to crawl their sites. As courts take on the issues raised by web crawlers, user privacy hangs in the balance. . . . This paper discusses the history of web crawlers in courts as well as the uses of such programs by a wide array of actors. It addresses ethical and legal issues surrounding the crawling and scraping of data posted online for uses not intended by the original poster or by the website on which the information is hosted. The article further suggests that stronger rules are necessary to protect the users’ initial expectations about how their data would be used, as well as their privacy.

Securing the Health of the Internet, Scott J. Shackelford, et al., June 4, 2018

Cybersecurity, including the security of information technology (IT), is a critical requirement in ensuring society trusts, and therefore can benefit from, modern technology. Problematically, though, rarely a day goes by without a news story related to how critical data has been exposed, exfiltrated, or otherwise inappropriately used or accessed as a result of supply chain vulnerabilities. From the Russian government’s campaign to influence the 2016 U.S. presidential election to the September 2017 Equifax breach of more than 140 million Americans’ credit reports, mitigating cyber risk has become a topic of conversation in boardrooms and the White House, on Wall Street and Main Street. But oftentimes these discussions miss the problems replete in the often-expansive supply chains on which many of the products and services we depend are built; this is particularly true in the medical device context. The problem recently made national news with the FDA-mandated recall of more than 400,000 pacemakers that were found to be vulnerable to hackers, necessitating a firmware update. This Article explores the myriad vulnerabilities in the supply chain for medical devices, investigates existing FDA cybersecurity and privacy regulations to identify any potential governance gaps, and suggests a path forward to boost cybersecurity due diligence for manufacturers by making use of new approaches and technologies, including blockchain.

Fairness Definitions Explained, Sahil Verma & Julia Rubin, May 29, 2018

Algorithm fairness has started to attract the attention of researchers in AI, Software Engineering and Law communities, with more than twenty different notions of fairness proposed in the last few years. Yet, there is no clear agreement on which definition to apply in each situation. Moreover, the detailed differences between multiple definitions are difficult to grasp. To address this issue, this paper collects the most prominent definitions of fairness for the algorithmic classification problem, explains the rationale behind these definitions, and demonstrates each of them on a single unifying case-study. Our analysis intuitively explains why the same case can be considered fair according to some definitions and unfair according to others.
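
The paper's point can also be made computationally: one set of predictions can be fair under one definition and unfair under another. The sketch below computes three common group metrics (selection rate for demographic parity, true-positive rate for equal opportunity, and positive predictive value for predictive parity) on made-up data; the formulations are standard ones and not necessarily the paper's exact notation.

```python
# Illustrative sketch: one set of predictions, three fairness definitions.
# Data are made up; definitions follow standard formulations.
def rate(vals):
    return sum(vals) / len(vals) if vals else float("nan")

def group_metrics(y_true, y_pred, groups, g):
    idx = [i for i, gi in enumerate(groups) if gi == g]
    yt = [y_true[i] for i in idx]
    yp = [y_pred[i] for i in idx]
    selection_rate = rate(yp)                                # demographic parity
    tpr = rate([p for t, p in zip(yt, yp) if t == 1])        # equal opportunity
    ppv = rate([t for t, p in zip(yt, yp) if p == 1])        # predictive parity
    return selection_rate, tpr, ppv

if __name__ == "__main__":
    y_true = [1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0]
    y_pred = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A"] * 6 + ["B"] * 6
    for g in ("A", "B"):
        sel, tpr, ppv = group_metrics(y_true, y_pred, groups, g)
        print(f"group {g}: selection_rate={sel:.2f}  TPR={tpr:.2f}  PPV={ppv:.2f}")
```

On this toy data the two groups have identical selection rates, so demographic parity holds, while their true-positive rates and positive predictive values differ, so equal opportunity and predictive parity do not.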

Prioritizing Privacy in the Courts and Beyond, Babette Boliek, May 17, 2018

Big data has affected American life and business in a variety of ways — inspiring both technological development and industrial change. However, the legal protection for a legal person’s right to his or her own personal information has not matched the growth in the collection and aggregation of data. These legal shortcomings are exacerbated when third party privacy interests are at stake in litigation. Judicial orders to compel sensitive data are expressly permitted even under the few privacy statutes that may control data transfers. Historically, the Federal Rules of Civil Procedure favor generous disclosure of information. But as litigation becomes more technical and data collection and transfer costs decrease, this Article argues that the judiciary must take an invigorated role in discovery — in particular when ill protected, third party privacy interests are at stake.

The Intuitive Appeal of Explainable Machines, Andrew Selbst & Solon Barocas, March 2, 2018

Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, Miles Brundage, et al., February 2018

Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.

The Grand Challenges of Science Robotics, Guang-Zhong Yang, et al., January 31, 2018

One of the ambitions of Science Robotics is to deeply root robotics research in science while developing novel robotic platforms that will enable new scientific discoveries. Of our 10 grand challenges, the first 7 represent underpinning technologies that have a wider impact on all application areas of robotics. For the next two challenges, we have included social robotics and medical robotics as application-specific areas of development to highlight the substantial societal and health impacts that they will bring. Finally, the last challenge is related to responsible innovation and how ethics and security should be carefully considered as we develop the technology further.

Artificial Intelligence and Consumer Privacy, Ginger Zhe Jin, January 29, 2018

Thanks to big data, artificial intelligence (AI) has spurred exciting innovations. In the meantime, AI and big data are reshaping the risk in consumer privacy and data security. In this essay, I first define the nature of the problem and then present a few facts about the ongoing risk. The bulk of the essay describes how the U.S. market copes with the risk in the current policy environment. It concludes with key challenges facing researchers and policy makers.

The Future Computed: Artificial Intelligence and Its Role in Society, Microsoft, January 17, 2018

Beyond our personal lives, AI will enable breakthrough advances in areas like healthcare, agriculture, education and transportation. It’s already happening in impressive ways. But as we’ve witnessed over the past 20 years, new technology also inevitably raises complex questions and broad societal concerns. As we look to a future powered by a partnership between computers and humans, it’s important that we address these challenges head on. How do we ensure that AI is designed and used responsibly? How do we establish ethical principles to protect people? How should we govern its use? And how will AI impact employment and jobs? To answer these tough questions, technologists will need to work closely with government, academia, business, civil society and other stakeholders. At Microsoft, we’ve identified six ethical principles – fairness, reliability and safety, privacy and security, inclusivity, transparency, and accountability – to guide the cross-disciplinary development and use of artificial intelligence. The better we understand these or similar issues — and the more technology developers and users can share best practices to address them — the better served the world will be as we contemplate societal rules to govern AI. We must also pay attention to AI’s impact on workers. What jobs will AI eliminate? What jobs will it create? If there has been one constant over 250 years of technological change, it has been the ongoing impact of technology on jobs — the creation of new jobs, the elimination of existing jobs and the evolution of job tasks and content. This too is certain to continue.

Artificial Intelligence and Privacy, Datatilsynet, The Norwegian Data Protection Authority, January 1, 2018

The Norwegian Data Protection Authority (DPA) believes it to be imperative that we further our knowledge about the privacy implications of artificial intelligence and discuss them, not only in order to safeguard the right to privacy of the individual, but also to meet the requirements of society at large.

Rewriting the “Book of the Machine”: Regulatory and Liability Issues for the Internet of Things, Jane E. Kirtley & Scott Memmel, January 1, 2018

With the Internet of Things (IoT) rapidly growing, this article discusses the security and privacy concerns related to IoT. Privacy issues are a growing concern as consumer use of IoT continues to grow. In addition, the article examines regulatory behavior in the United States compared with actions taken by the European Union. The article concludes by suggesting how liability can be assigned in the likely event that personal data is stolen from IoT devices.

Bridging the Gap Between Computer Science and Legal Approaches to Privacy, Kobbi Nissim et al., 2018

The analysis and release of statistical data about individuals and groups of individuals carries inherent privacy risks, and these risks have been conceptualized in different ways within the fields of law and computer science. For instance, many information privacy laws adopt notions of privacy risk that are sector- or context-specific, such as in the case of laws that protect from disclosure certain types of information contained within health, educational, or financial records. In addition, many privacy laws refer to specific techniques, such as deidentification, that are designed to address a subset of possible attacks on privacy. In doing so, many legal standards for privacy protection rely on individual organizations to make case-by-case determinations regarding concepts such as the identifiability of the types of information they hold. These regulatory approaches are intended to be flexible, allowing organizations to (1) implement a variety of specific privacy measures that are appropriate given their varying institutional policies and needs, (2) adapt to evolving best practices, and (3) address a range of privacy-related harms. However, in the absence of clear thresholds and detailed guidance on making case-specific determinations, flexibility in the interpretation and application of such standards also creates uncertainty for practitioners and often results in ad hoc, heuristic processes. This uncertainty may pose a barrier to the adoption of new technologies that depend on unambiguous privacy requirements. It can also lead organizations to implement measures that fall short of protecting against the full range of data privacy risks.

2017

Runaway Feedback Loops in Predictive Policing, Danielle Ensign et al., December 22, 2017

Predictive policing systems are increasingly used to determine how to allocate police across a city in order to best prevent crime. Discovered crime data (e.g., arrest counts) are used to help update the model, and the process is repeated. Such systems have been empirically shown to be susceptible to runaway feedback loops, where police are repeatedly sent back to the same neighborhoods regardless of the true crime rate. In response, we develop a mathematical model of predictive policing that proves why this feedback loop occurs, show empirically that this model exhibits such problems, and demonstrate how to change the inputs to a predictive policing system (in a black-box manner) so the runaway feedback loop does not occur, allowing the true crime rate to be learned. Our results are quantitative: we can establish a link (in our model) between the degree to which runaway feedback causes problems and the disparity in crime rates between areas. Moreover, we can also demonstrate the way in which reported incidents of crime (those reported by residents) and discovered incidents of crime (i.e. those directly observed by police officers dispatched as a result of the predictive policing algorithm) interact: in brief, while reported incidents can attenuate the degree of runaway feedback, they cannot entirely remove it without the interventions we suggest.
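
A tiny simulation conveys the dynamic: if patrols are allocated in proportion to previously discovered incidents, and incidents are only discovered where patrols go, discoveries can concentrate in one neighborhood even when true rates are nearly equal. The simulation below is a deliberately crude caricature in the spirit of the urn models used in this literature, not the authors' model or data.

```python
# Simplified caricature of a runaway feedback loop in predictive policing.
# Two neighborhoods with similar true incident rates; patrols are allocated in
# proportion to previously *discovered* incidents, and incidents are only
# discovered where patrols go. Not the paper's model, just the intuition.
import random

random.seed(0)
true_rates = [0.30, 0.28]          # similar underlying daily incident probabilities
discovered = [1, 1]                # prior counts (start nearly symmetric)
PATROLS_PER_DAY = 10

for _ in range(365):
    total = sum(discovered)
    for _ in range(PATROLS_PER_DAY):
        # Send each patrol to an area with probability proportional to its
        # share of discovered incidents so far.
        r = random.random() * total
        area = 0 if r < discovered[0] else 1
        if random.random() < true_rates[area]:   # discovered only if patrolled
            discovered[area] += 1
            total += 1

share = discovered[0] / sum(discovered)
print(f"share of discoveries in area 0 after a year: {share:.2f}")
# With near-equal true rates, this share often drifts far from 0.5, because
# early random discoveries attract more patrols and hence more discoveries.
```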

Emergent AI, Social Robots, and the Law: Security, Privacy and Policy Issues, Ramesh Subramanian, December 17, 2017

The rapid growth of AI systems has implications on a wide variety of fields. It can prove to be a boon to disparate fields such as healthcare, education, global logistics and transportation, to name a few. However, these systems will also bring forth far-reaching changes in employment, economy and security. As AI systems gain acceptance and become more commonplace, certain critical questions arise: What are the legal and security ramifications of the use of these new technologies? Who can use them, and under what circumstances? What is the safety of these systems? Should their commercialization be regulated? What are the privacy issues associated with the use of these technologies? What are the ethical considerations? Who has responsibility for the large amounts of data that is collected and manipulated by these systems? Could these systems fail? What is the recourse if there is a system failure? These questions are but a small subset of possible questions in this key emerging field. In this paper, we focus primarily on the legal questions that relate to the security, privacy, ethical, and policy considerations that emerge from one of these types of technologies, namely social robots. We begin with a history of the field, then go deeper into legal issues, the associated issues of security, privacy and ethics, and consider some solutions to these issues. Finally, we conclude with a look at the future as well as a modest proposal for future research addressing some of the challenges listed.

Playing with the Data: What Legal Scholars Should Learn About Machine Learning, David Lehr & Paul Ohm, December 2017

Legal scholars have begun to focus intently on machine learning – the name for a large family of techniques used for sophisticated new forms of data analysis that are becoming key tools of prediction and decision-making. We think this burgeoning scholarship has tended to treat machine learning too much as a monolith and an abstraction, largely ignoring some of its most consequential stages. As a result, many potential harms and benefits of automated decision-making have not yet been articulated, and policy solutions for addressing those impacts remain underdeveloped. To fill these gaps in legal scholarship, in this Article we provide a rich breakdown of the process of machine learning. We divide this process roughly into eight steps: problem definition, data collection, data cleaning, summary statistics review, data partitioning, model selection, model training, and model deployment. Far from a straight linear path, most machine learning dances back and forth across these steps, whirling through successive passes of model building and refinement.
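
For readers who want to see the eight steps side by side, the sketch below walks through them on synthetic data using scikit-learn; every name and number in it is an illustrative assumption, not the authors' example.

```python
# Minimal, illustrative walk-through of the eight steps named above, using
# scikit-learn and synthetic data (everything here is an assumption for
# demonstration, not the authors' example).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# 1. Problem definition: predict a binary outcome from two numeric features.
rng = np.random.default_rng(0)

# 2. Data collection (synthetic stand-in for gathered records).
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# 3. Data cleaning: drop rows with missing values (none here; shown for shape).
mask = ~np.isnan(X).any(axis=1)
X, y = X[mask], y[mask]

# 4. Summary statistics review.
print("feature means:", X.mean(axis=0), "positive rate:", y.mean())

# 5. Data partitioning.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# 6. Model selection and 7. model training.
model = LogisticRegression().fit(X_train, y_train)

# 8. Model deployment (here, just scoring held-out data as a stand-in).
print("held-out accuracy:", model.score(X_test, y_test))
```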

Accountability of AI Under the Law: The Role of Explanation, Finale Doshi-Velez & Mason Kortz, November 21, 2017

The ubiquity of systems using artificial intelligence or “AI” has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before – applications range from clinical decision support to autonomous driving and predictive policing. That said, our AIs continue to lag in common sense reasoning [McCarthy, 1960], and thus there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems [Bostrom, 2003, Amodei et al., 2016, Sculley et al., 2014]. How can we take advantage of what AI systems have to offer, while also holding them accountable? In this work, we focus on one tool: explanation. Questions about a legal right to explanation from AI systems were recently debated in the EU General Data Protection Regulation [Goodman and Flaxman, 2016, Wachter et al., 2017a], and thus thinking carefully about when and how explanation from AI systems might improve accountability is timely. Good choices about when to demand explanation can help prevent negative consequences from AI systems, while poor choices may not only fail to hold AI systems accountable but also hamper the development of much-needed beneficial AI systems. We briefly review current societal, moral, and legal norms around explanation, and then focus on the different contexts under which explanation is currently required under the law. We find that there exists great variation around when explanation is demanded, but there also exist important consistencies: when demanding explanation from humans, what we typically want to know is whether and how certain input factors affected the final decision or outcome. These consistencies allow us to list the technical considerations that must be considered if we desired AI systems that could provide the kinds of explanations that are currently required of humans under the law. Contrary to popular wisdom of AI systems as indecipherable black boxes, we find that this level of explanation should generally be technically feasible but may sometimes be practically onerous – there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard.

Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation, Gianclaudio Malgieri & Giovanni Comande, November 13, 2017

The aim of this contribution is to analyse the real borderlines of the ‘right to explanation’ in the GDPR and to discretely distinguish between different levels of information and of consumers’ awareness in the ‘black box society’. In order to combine transparency and comprehensibility we propose the new concept of algorithm ‘legibility’. We argue that a systemic interpretation is needed in this field, since it can be beneficial not only for individuals but also for businesses. This may be an opportunity for auditing algorithms and correcting unknown machine biases, thus similarly enhancing the quality of decision-making outputs. Accordingly, we show how a systemic interpretation of articles 13-15 and 22 GDPR is necessary . . . . In addition, we recommend a ‘legibility test’ that data controllers should perform in order to comply with the duty to provide meaningful information about the logic involved in an automated decision-making.

Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, Sandra Wachter, Brent Mittelstadt & Chris Russell, November 1, 2017

There has been much discussion of the right to explanation in the EU General Data Protection Regulation, and its existence, merits, and disadvantages. Implementing a right to explanation that opens the black box of algorithmic decision-making faces major legal and technical barriers . . . . [With these barriers in mind], [l]ooking at explanations as a means to help a data subject act rather than merely understand, one could gauge the scope and content of explanations according to the specific goal or action they are intended to support. From the perspective of individuals affected by automated decision-making, we propose three aims for explanations: (1) to inform and help the individual understand why a particular decision was reached, (2) to provide grounds to contest the decision if the outcome is undesired, and (3) to understand what would need to change in order to receive a desired result in the future, based on the current decision-making model. We assess how each of these goals finds support in the GDPR. We suggest data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these three aims. These counterfactual explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
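
The idea translates naturally into a small search problem: find the smallest change to an input that flips the model's decision. The sketch below runs a crude one-feature search against a toy scoring rule; the model, features, and search procedure are illustrative assumptions, not the authors' method.

```python
# Crude sketch of generating a counterfactual explanation for a toy credit model.
# The model, features, and greedy one-feature search are illustrative assumptions.
def approve_loan(income, debt, years_employed):
    """Toy scoring rule standing in for an opaque model's decision."""
    score = income / 10000 - debt / 10000 + 0.5 * years_employed
    return score >= 3.0

def counterfactual_income(income, debt, years_employed, step=100, max_steps=1000):
    """Smallest income increase (in `step` increments) that flips a denial."""
    if approve_loan(income, debt, years_employed):
        return 0
    for k in range(1, max_steps + 1):
        if approve_loan(income + k * step, debt, years_employed):
            return k * step
    return None

if __name__ == "__main__":
    income, debt, years = 40_000, 20_000, 1     # applicant currently denied
    delta = counterfactual_income(income, debt, years)
    print(f"decision: {approve_loan(income, debt, years)}")
    print(f"counterfactual: an income of {income + delta} would flip the decision")
```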

Practical Secure Aggregation for Privacy-Preserving Machine Learning, Keith Bonawitz et al., October 30, 2017

We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user’s individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers 1.73× communication expansion for 2^10 users and 2^20-dimensional vectors, and 1.98× expansion for 2^14 users and 2^24-dimensional vectors over sending data in the clear.
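
The core idea behind such protocols can be illustrated with pairwise additive masking: each pair of clients shares a random mask that one adds and the other subtracts, so individual uploads look like noise but the masks cancel in the server's sum. The sketch below shows only that cancellation step and omits the paper's key agreement, secret sharing, and dropout handling.

```python
# Illustration of pairwise-masked aggregation: the server learns only the sum.
# This omits the real protocol's key agreement and dropout recovery; it only
# demonstrates why pairwise masks cancel in the aggregate.
import random

NUM_CLIENTS = 4
VECTOR_DIM = 3

# Private per-client update vectors (e.g., model gradients).
private_vectors = [[random.uniform(-1, 1) for _ in range(VECTOR_DIM)]
                   for _ in range(NUM_CLIENTS)]

# Each unordered pair (i, j) with i < j shares a random mask vector.
pair_masks = {(i, j): [random.uniform(-100, 100) for _ in range(VECTOR_DIM)]
              for i in range(NUM_CLIENTS) for j in range(i + 1, NUM_CLIENTS)}

def masked_upload(client):
    """Client adds masks shared with higher-indexed peers, subtracts those with lower."""
    vec = list(private_vectors[client])
    for (i, j), mask in pair_masks.items():
        if client == i:
            vec = [v + m for v, m in zip(vec, mask)]
        elif client == j:
            vec = [v - m for v, m in zip(vec, mask)]
    return vec

uploads = [masked_upload(c) for c in range(NUM_CLIENTS)]      # look like noise individually
server_sum = [sum(u[d] for u in uploads) for d in range(VECTOR_DIM)]
true_sum = [sum(v[d] for v in private_vectors) for d in range(VECTOR_DIM)]
print([round(s, 6) for s in server_sum])
print([round(t, 6) for t in true_sum])                        # masks cancel: sums match
```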

Artificial Intelligence: The Public Policy Opportunity, Intel, October 18, 2017

Intel powers the cloud and billions of smart, connected computing devices. Due to the decreasing cost of computing enabled by Moore’s Law and the increasing availability of connectivity, these connected devices are now generating millions of terabytes of data every day. Recent breakthroughs in computer and data science give us the ability to timely analyze and derive immense value from that data. As Intel distributes the computing capability of the data center across the entire global network, the impact of artificial intelligence is significantly increasing. Artificial intelligence is creating an opportunity to drive a new wave of economic progress while solving some of the world’s most difficult problems. This is the artificial intelligence (AI) opportunity. To allow AI to realize its potential, governments need to create a public policy environment that fosters AI innovation, while also mitigating unintended societal consequences. This document presents Intel’s AI public policy recommendations.

The Other Question: Can and Should Robots Have Rights?, David J. Gunkel, October 17, 2017

This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. Second, capitalizing on this verbal distinction, it is possible to identify four modalities concerning social robots and the question of rights. The second section will identify and critically assess these four modalities as they have been deployed and developed in the current literature. Finally, we will conclude by proposing another alternative, a way of thinking otherwise that effectively challenges the existing rules of the game and provides for other ways of theorizing moral standing that can scale to the unique challenges and opportunities that are confronted in the face of social robots.

Algorithmic Transparency for the Smart City, Robert Brauneis & Ellen Goodman, September 21, 2017

Emerging across many disciplines are questions about algorithmic ethics – about the values embedded in artificial intelligence and big data analytics that increasingly replace human decision-making. Many are concerned that an algorithmic society is too opaque to be accountable for its behavior. An individual can be denied parole or denied credit, fired or not hired for reasons she will never know and that cannot be articulated. In the public sector, the opacity of algorithmic decision-making is particularly problematic both because governmental decisions may be especially weighty, and because democratically-elected governments bear special duties of accountability. Investigative journalists have recently exposed the dangerous impenetrability of algorithmic processes used in the criminal justice field – dangerous because the predictions they make can be both erroneous and unfair, with none the wiser. We set out to test the limits of transparency around governmental deployment of big data analytics, focusing our investigation on local and state government use of predictive algorithms. It is here, in local government, that algorithmically-determined decisions can be most directly impactful. And it is here that stretched agencies are most likely to hand over the analytics to private vendors, which may make design and policy choices out of the sight of the client agencies, the public, or both. To see just how impenetrable the resulting “black box” algorithms are, we filed 42 open records requests in 23 states seeking essential information about six predictive algorithm programs. We selected the most widely-used and well-reviewed programs, including those developed by for-profit companies, nonprofits, and academic/private sector partnerships. The goal was to see if, using the open records process, we could discover what policy judgments these algorithms embody, and could evaluate their utility and fairness. To do this work, we identified what meaningful “algorithmic transparency” entails. We found that in almost every case, it wasn’t provided. Over-broad assertions of trade secrecy were a problem. But contrary to conventional wisdom, they were not the biggest obstacle. It will not usually be necessary to release the code used to execute predictive models to dramatically increase transparency. We conclude that publicly-deployed algorithms will be sufficiently transparent only if (1) governments generate appropriate records about their objectives for algorithmic processes and subsequent implementation and validation; (2) government contractors reveal to the public agency sufficient information about how they developed the algorithm; and (3) public agencies and courts treat trade secrecy claims as the limited exception to public disclosure that the law requires. We present what we believe are eight principal types of information that records concerning publicly implemented algorithms should contain.

AI, Ethics, and Enhanced Data Stewardship, Information Accountability Foundation, September 20, 2017

The terms data ethics and ethical processing are in vogue. The popularity of these concepts stems from the rapid growth of innovative data-driven technologies and the application of these innovations to areas that can have a material impact on people’s daily lives. The sheer volume of data that can be observed, and from which inferences can be drawn through analytics, has affected and will continue to affect many facets of people’s lives, including new health solutions, business models, personalization for individuals and tangible benefits for society. Yet those same data and technologies can have an inappropriate impact on, and even harm, individuals and groups of individuals, and can negatively affect societal goals and values. An evolved form of accountability, ethical processing, applicable to advanced analytics, is needed to help realize the benefits of this use of data while addressing any resulting risks. The Information Accountability Foundation (IAF) has established an Artificial Intelligence (AI) and Ethics Project to tackle these issues. The Project’s objective is to begin the global discussion of how organisations might address the application of ethical data processing to new technologies. The IAF thinks this work is particularly necessary where data enabled decisions are made without the intervention of people. In these circumstances, corporate governance takes on added importance and ethical objectives need to be built into data processing architecture. The IAF further believes the governance structures being suggested are also applicable where data from observational technologies, such as sensors, inferences from analytics, and data synthesized from other data sets, are used to drive advanced analytics.

Artificial Intelligence and Public Policy, Adam D. Thierer, Andrea Castillo, Raymond Russell, August 22, 2017

There is growing interest in the market potential of artificial intelligence (AI) technologies and applications as well as in the potential risks that these technologies might pose. As a result, questions are being raised about the legal and regulatory governance of AI, machine learning, “autonomous” systems, and related robotic and data technologies. Citing concerns about labor market effects, social inequality, and even physical harm, some have called for precautionary regulations that could have the effect of limiting AI development and deployment. In this paper, we recommend a different policy framework for AI technologies. At this nascent stage of AI technology development, we think a better case can be made for prudence, patience, and a continuing embrace of “permissionless innovation” as it pertains to modern digital technologies. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated, and problems, if they develop at all, can be addressed later.

The Inadequate, Invaluable Fair Information Practices, Woodrow Hartzog, August 15, 2017

A sea change is afoot in the relationship between privacy and technology. FIPs-based regimes were relatively well-equipped for the first wave of personal computing. But automated technologies and exponentially greater amounts of data have pushed FIPs principles like data minimization, transparency, choice, and access to the limit. Advances in robotics, genetics, biometrics, and algorithmic decision-making are challenging the idea that rules meant to ensure fair aggregation of personal information in databases are sufficient. Control over information in databases isn’t even the half of it anymore. The mass connectivity of the “Internet of Things” and near ubiquity of mobile devices make the security and surveillance risks presented by the isolated computer terminals and random CCTV cameras of the ’80s and ’90s seem quaint. But we’ve come too far with the FIPs to turn back now. The FIPs model of privacy regulation has been adopted by nearly every country in the world that has decided to take data protection seriously. Normatively, the FIPs have been with us so long that in many ways they have become synonymous with privacy. At this point, abandoning the FIPs is out of the question. Even tinkering with them requires true urgency and a good plan. But modern privacy problems require more than just the FIPs. Hence, the pickle.

Artificial Intelligence Policy: A Primer and Roadmap, Ryan Calo, August 9, 2017

Talk of artificial intelligence is everywhere. People marvel at the capacity of machines to translate any language and master any game. Others condemn the use of secret algorithms to sentence criminal defendants or recoil at the prospect of machines gunning for blue, pink, and white-collar jobs. Some worry aloud that artificial intelligence will be humankind’s “final invention.” This essay, prepared in connection with UC Davis Law Review’s 50th anniversary symposium, explains why AI is suddenly on everyone’s mind and provides a roadmap to the major policy questions AI raises. The essay is designed to help policymakers, investors, technologists, scholars, and students understand the contemporary policy environment around AI at least well enough to initiate their own exploration. Topics covered include: justice and equity; use of force; safety and certification; privacy (including data parity); and taxation and displacement of labor. In addition to these topics, the essay will touch briefly on a selection of broader systemic questions: institutional configuration and expertise; investment and procurement; removing hurdles to accountability; and correcting mental models of AI.

Averting Robot Eyes, Margot E. Kaminski, Matthew Rueben, William D. Smart & Cindy M. Grimm, July 19, 2017

Home robots will cause privacy harms. At the same time, they can provide beneficial services – as long as consumers trust them. This Essay evaluates potential technological solutions that could help home robots keep their promises, avert their eyes, and otherwise mitigate privacy harms. Our goals are to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms. We posit that home robots will raise privacy problems of three basic types: (1) data privacy problems; (2) boundary management problems; and (3) social/relational problems. Technological design can ward off, if not fully prevent, a number of these harms. We propose five principles for home robots and privacy design: data minimization, purpose specifications, use limitations, honest anthropomorphism, and dynamic feedback and participation. We review current research into privacy-sensitive robotics, evaluating what technological solutions are feasible and where the harder problems lie. We close by contemplating legal frameworks that might encourage the implementation of such design, while also recognizing the potential costs of regulation at these early stages of the technology.

Improving the Realism of Synthetic Images, Apple Machine Learning Journal, July 7, 2017

Training machine learning models on standard synthetic images is problematic as the images may not be realistic enough, leading the model to learn details present only in synthetic images and to fail to generalize well on real images. One approach to bridge this gap between synthetic and real images would be to improve the simulator, which is often expensive and difficult, and even the best rendering algorithm may still fail to model all the details present in the real images. This lack of realism may cause models to overfit to ‘unrealistic’ details in the synthetic images. Instead of modeling all the details in the simulator, could we learn them from data? To this end, we developed a method for refining synthetic images to make them look more realistic.

An exploration on artificial intelligence application: From security, privacy and ethic perspective, Xiuquan Li and Tao Zhang, June 19, 2017

Artificial intelligence is believed to be a disruptive technology that will change our economy and society significantly in the near future. It can be employed to replace human labor in many dangerous and tedious tasks, providing us with a more convenient and efficient life. We can benefit a great deal from the wide application of this emerging technology. However, there are also potential risks and threats in the application of artificial intelligence, which need to be handled properly before extensive use. In this paper, we discuss the security, privacy and ethical issues in artificial intelligence applications and point out the potential risks and threats. We suggest countermeasures in research, regulation and supervision, and set out our expectations for the development of artificial intelligence.

Algorithmic Decision Making and the Cost of Fairness, Sam Corbett-Davies et al., June 10, 2017

Algorithms are now regularly used to decide whether defendants awaiting trial are too dangerous to be released back into the community. In some cases, black defendants are substantially more likely than white defendants to be incorrectly classified as high risk. To mitigate such disparities, several techniques have recently been proposed to achieve algorithmic fairness. Here we reformulate algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. We show that for several past definitions of fairness, the optimal algorithms that result require detaining defendants above race-specific risk thresholds. We further show that the optimal unconstrained algorithm requires applying a single, uniform threshold to all defendants. The unconstrained algorithm thus maximizes public safety while also satisfying one important understanding of equality: that all individuals are held to the same standard, irrespective of race. Because the optimal constrained and unconstrained algorithms generally differ, there is tension between improving public safety and satisfying prevailing notions of algorithmic fairness. By examining data from Broward County, Florida, we show that this trade-off can be large in practice. We focus on algorithms for pretrial release decisions, but the principles we discuss apply to other domains, and also to human decision makers carrying out structured decision rules.
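
The paper's central tension can be illustrated numerically: with a fixed detention budget, detaining the riskiest defendants overall (one threshold) and detaining equal shares within each group (a parity constraint) select different people, and the constrained rule releases more expected risk. The synthetic data, group risk distributions, and utility proxy below are made-up assumptions, not the paper's Broward County analysis.

```python
# Numeric illustration with synthetic data: detaining a fixed share of defendants,
# either by one overall risk threshold or by equal detention rates per group.
# Risk scores, group means, and the "expected offenses" proxy are illustrative
# assumptions; this is not the paper's analysis.
import random

random.seed(3)

def sample_risks(n, mean):
    """Synthetic risk scores clipped to (0, 1); risk = probability of reoffense."""
    return [min(max(random.gauss(mean, 0.15), 0.01), 0.99) for _ in range(n)]

group_a = sample_risks(5000, 0.35)       # lower-average-risk group (synthetic)
group_b = sample_risks(5000, 0.50)       # higher-average-risk group (synthetic)
BUDGET = 0.3                             # detain 30% of all defendants

def expected_offenses_released(released):
    return sum(released)                 # sum of risks = expected offenses

# Unconstrained rule: one threshold, detain the riskiest 30% overall.
everyone = sorted(group_a + group_b, reverse=True)
n_detain = int(BUDGET * len(everyone))
released_single = everyone[n_detain:]

# Classification-parity rule: detain the riskiest 30% within each group separately.
def released_within(group):
    g = sorted(group, reverse=True)
    return g[int(BUDGET * len(g)):]

released_parity = released_within(group_a) + released_within(group_b)

print("expected offenses among released, single threshold:     ",
      round(expected_offenses_released(released_single), 1))
print("expected offenses among released, equal detention rates:",
      round(expected_offenses_released(released_parity), 1))
# With these synthetic distributions the parity-constrained rule releases more
# high-risk defendants, illustrating the trade-off the authors quantify.
```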

Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably Not the Remedy You Are Looking For, Lilian Edwards and Michael Veale, May 23, 2017

However, we argue that a right to an explanation in the GDPR is unlikely to be a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. We present several reasons for this conclusion. First (section 3), the law is restrictive on when any explanation-related right can be triggered, and in many places is unclear, or even seems paradoxical. Second (section 4), even were some of these restrictions to be navigated, the way that explanations are conceived of legally — as “meaningful information about the logic of processing” — is unlikely to be provided by the kind of ML “explanations” computer scientists have been developing. ML explanations are restricted both by the type of explanation sought, the multi-dimensionality of the domain and the type of user seeking an explanation. However “subject-centric” explanations (SCEs), which restrict explanations to particular regions of a model around a query, show promise for interactive exploration, as do pedagogical rather than de-compositional explanations in dodging developers’ worries of IP or trade secrets disclosure.

Physiognomy’s New Clothes, Blaise Aguera y Arcas, Margaret Mitchell & Alexander Todorov, May 6, 2017

In an era of pervasive cameras and big data, machine-learned physiognomy can also be applied at unprecedented scale. Given society’s increasing reliance on machine learning for the automation of routine cognitive tasks, it is urgent that developers, critics, and users of artificial intelligence understand both the limits of the technology and the history of physiognomy, a set of practices and beliefs now being dressed in modern clothes. Hence, we are writing both in depth and for a wide audience: not only for researchers, engineers, journalists, and policymakers, but for anyone concerned about making sure AI technologies are a force for good.

Machine Learning: The Power and Promise of Computers that Learn by Example, Royal Society, April 25, 2017

Machine learning is a branch of artificial intelligence that allows computer systems to learn directly from examples, data, and experience. Through enabling computers to perform specific tasks intelligently, machine learning systems can carry out complex processes by learning from data, rather than following pre-programmed rules.

Artificial Intelligence and the ‘Good Society’: The US, EU, and UK Approach, Corinne Cath, et al., March 28, 2017

In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence (AI). In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favorable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: (a) the development of a ‘good AI society’; (b) the role and responsibility of the government, the private sector, and the research community (including academia) in pursuing such a development; and (c) where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports adequately address various ethical, social, and economic topics, but fall short of providing an overarching political vision and long-term strategy for the development of a ‘good AI society’. To help fill this gap, in the conclusion we suggest a two-pronged approach.

Fairness in Criminal Justice Risk Assessments: The State of the Art, Richard Berk, et al., March 27, 2017

Objectives: Discussions of fairness in criminal justice risk assessments typically lack conceptual precision. Rhetoric too often substitutes for careful analysis. In this paper, we seek to clarify the tradeoffs between different kinds of fairness and between fairness and accuracy. Methods: We draw on the existing literatures in criminology, computer science and statistics to provide an integrated examination of fairness and accuracy in criminal justice risk assessments. We also provide an empirical illustration using data from arraignments. Results: We show that there are at least six kinds of fairness, some of which are incompatible with one another and with accuracy. Conclusions: Except in trivial cases, it is impossible to maximize accuracy and fairness at the same time, and impossible simultaneously to satisfy all kinds of fairness. In practice, a major complication is different base rates across different legally protected groups. There is a need to consider challenging tradeoffs.
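
As a toy illustration of the base-rate complication described above, the following sketch (entirely synthetic data, not the paper's empirical analysis) applies the same risk score and the same threshold to two groups with different base rates; the error rates come out equal, but positive predictive value does not, so the different fairness notions cannot all be satisfied at once.

```python
# Illustrative sketch only: synthetic data showing why differing base rates
# force trade-offs among fairness definitions (not the paper's analysis).
import numpy as np

rng = np.random.default_rng(1)

def confusion_summary(base_rate, n=100_000):
    y = rng.random(n) < base_rate                  # true outcome per person
    score = 0.3 * y + rng.normal(0.35, 0.2, n)     # same score quality for both groups
    pred = score > 0.5                             # one shared decision threshold
    return pred[~y].mean(), (~pred[y]).mean(), y[pred].mean()  # FPR, FNR, PPV

for group, base_rate in [("A", 0.2), ("B", 0.4)]:
    fpr, fnr, ppv = confusion_summary(base_rate)
    print(f"group {group}: FPR={fpr:.2f}  FNR={fnr:.2f}  PPV={ppv:.2f}")
# Error rates (FPR, FNR) match across groups, yet PPV differs because the
# base rates differ: equalizing one notion of fairness unbalances another.
```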

An Education Theory of Fault for Autonomous Systems, William D. Smart, Cindy Grimm & Woodrow Hartzog, March 22, 2017

We think that part of the problem with our discussion of fault is that we have yet to settle on the best approach and language to use to specifically target culpable behavior in the design and deployment of automated systems. The purpose of this paper is to offer an additional structured and nuanced way of thinking about the duties and culpable behavior of all the relevant stakeholders in the creation and deployment of autonomous systems. In this article, we argue that some of the most articulable failures in the creation and deployment of unpredictable systems lie in the lack of communication, clarity, and education among the procurer, developer, and users of automated systems. In other words, while it is hard to exert meaningful “control” over automated systems to get them to act predictably, developers and procurers have great control over how much they test and articulate the limits of an automated technology to all the other relevant parties. This makes testing and education one of the most legally relevant points of failure when automated systems harm people.

Regulating Inscrutable Systems, Andrew D. Selbst & Solon Barocas, March 20, 2017

This Article takes seriously the calls for regulation via explanation to investigate how existing laws implementing such calls fare, and whether interpretability research can fix the flaws. Ultimately, it argues that while machine interpretability may make compliance with existing legal regimes easier, or possible in the first instance, a focus on explanation alone fails to fulfill the overarching normative purpose of the law, even when compliance can be achieved. The paper concludes with a call to consider where such goals would be better served by other means, including mechanisms to directly assess whether models are fair and just.

Towards Moral Autonomous Systems, Vicky Charisi, et al., March 16, 2017

Both the ethics of autonomous systems and the problems of their technical implementation have by now been studied in some detail. Less attention has been given to the areas in which these two separate concerns meet. This paper, written by both philosophers and engineers of autonomous systems, addresses a number of issues in machine ethics that are located at precisely the intersection between ethics and engineering. We first discuss different approaches towards the conceptual design of autonomous systems and their implications on the ethics implementation in such systems. Then we examine problematic areas regarding the specification and verification of ethical behavior in autonomous systems, particularly with a view towards the requirements of future legislation. We discuss transparency and accountability issues that will be crucial for any future wide deployment of autonomous systems in society. Finally we consider the, often overlooked, possibility of intentional misuse of AI systems and the possible dangers arising out of deliberately unethical design, implementation, and use of autonomous robots.

Rethinking the Fourth Amendment in the Age of Supercomputers, Artificial Intelligence, and Robots, Melanie Reid, March 16, 2017

Law enforcement currently uses cognitive computers to conduct predictive and content analytics and manage information contained in large police data files. These big data analytics and insight capabilities are more effective than using traditional investigative tools and save law enforcement time and a significant amount of financial and personnel resources. It is not farfetched to think law enforcement’s use of cognitive computing will extend to using thinking, real-time robots in the field in the not-so-distant future. IBM’s Watson currently uses its artificial intelligence to suggest medical diagnoses and treatment in the healthcare industry and assists the finance industry in improving investment decisions. IBM and similar companies already offer predictive analytics and cognitive computing programs to law enforcement for real-time intelligence and investigative purposes. This article will explore the consequences of predictive and content analytics and the future of cognitive computing, such as the use of “robots” like an imaginary “Officer Joe Roboto” in the law enforcement context. Would our interactions with Officer Joe Roboto trigger the same Fourth Amendment concerns and protections as those when dealing with a flesh-and-blood police officer? Are we more afraid of a “robotic” Watson, its capabilities, and lack of feeling and biases, compared to a human law enforcement officer? Assuming someday in the future we might be able to solve the physical limitations of a robot, would a “robotic” officer be preferable to a human one? What sort of limitations would we place on such technology? This article attempts to explore the ramifications of using such computers/robots in the future. Autonomous robots with artificial intelligence and the widespread use of predictive analytics are the future tools of law enforcement in a digital age, and we must determine how to handle the appropriate use of these tools.

Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence, Dr. Michael Guihot, Anne Matthew, and Dr. Nicolas Suzor, March 13, 2017

There is a pervading sense of unease that artificially intelligent machines will soon radically alter our lives in ways that are still unknown. Advances in AI technology are developing at an extremely rapid rate as computational power continues to grow exponentially. Even if existential concerns about AI do not materialise, there are enough concrete examples of problems associated with current applications of artificial intelligence to warrant concern about the level of control that exists over developments in AI. Some form of regulation is likely necessary to protect society from risks of harm. However, advances in regulatory capacity have not kept pace with developments in new technologies including AI. This is partly because regulation has become decentered; that is, the traditional role of public regulators such as governments commanding regulation has been dissipated and other participants including those from within the industry have taken the lead. Other contributing factors are the dwindling of resources in governments on the one hand and the increased power of technology companies on the other. These factors have left the field of AI development relatively unregulated. Whatever the reason, it is now more difficult for traditional public regulatory bodies to control the development of AI. In the vacuum, industry participants have begun to self-regulate by promoting soft law options such as codes of practice and standards. We argue that, despite the reduced authority of public regulatory agencies, the risks associated with runaway AI require regulators to begin to participate in what is largely an unregulated field. In an environment where resources are scarce, governments or public regulators must develop new ways of regulating. This paper proposes solutions to regulating the development of AI ex ante. We suggest a two-step process: first, governments can set expectations and send signals to influence participants in AI development. We adopt the term nudging to refer to this type of influencing. Second, public regulators must participate in and interact with the relevant industries. By doing this, they can gather information and knowledge about the industries, begin to assess risks and then be in a position to regulate those areas that pose most risk first. To conduct a proper risk analysis, regulators must have sufficient knowledge and understanding about the target of regulation to be able to classify various risk categories. We have proposed an initial classification based on the literature that can help to direct pressing issues for further research and a deeper understanding of the various applications of AI and the relative risks they pose.

The Undue Influence of Surveillance Technology Companies on Policing, Elizabeth E. Joh, March 10, 2017

Conventional wisdom assumes that the police are in control of their investigative tools. But with surveillance technologies, this is not always the case. Increasingly, police departments are consumers of surveillance technologies that are created, sold, and controlled by private companies. These surveillance technology companies exercise an undue influence over the police today in ways that aren’t widely acknowledged, but that have enormous consequences for civil liberties and police oversight. Three seemingly unrelated examples — stingray cellphone surveillance, body cameras, and big data software — demonstrate varieties of this undue influence. These companies act out of private self-interest, but their decisions have considerable public impact. The harms of this private influence include the distortion of Fourth Amendment law, the undermining of accountability by design, and the erosion of transparency norms. This Essay demonstrates the increasing degree to which surveillance technology vendors can guide, shape, and limit policing in ways that are not widely recognized. Any vision of increased police accountability today cannot be complete without consideration of the role surveillance technology companies play.

Regulatory Challenges of Robotics: Some Guidelines for Addressing Legal and Ethical Issues, Ronald Leenes et al., March 7, 2017

Robots are slowly, but certainly, entering people’s professional and private lives. They require the attention of regulators due to the challenges they present to existing legal frameworks and the new legal and ethical questions they raise. This paper discusses four major regulatory dilemmas in the field of robotics: how to keep up with technological advances; how to strike a balance between stimulating innovation and the protection of fundamental rights and values; whether to affirm prevalent social norms or nudge social norms in a different direction; and, how to balance effectiveness versus legitimacy in techno-regulation. The four dilemmas are each treated in the context of a particular modality of regulation: law, market, social norms, and technology as a regulatory tool; and for each, we focus on particular topics – such as liability, privacy, and autonomy – that often feature as the major issues requiring regulatory attention. The paper then highlights the role and potential of the European framework of rights and values, responsible research and innovation, smart regulation and soft law as means of dealing with the dilemmas.

Big Data, artificial intelligence, machine learning and data protection, Information Commissioner’s Office, March 2, 2017

Big data, artificial intelligence (AI) and machine learning are becoming widespread in the public and private sectors. Data is being collected from an increasing variety of sources and the analytics being applied are more and more complex. While many benefits flow from these types of processing operations, when personal data is involved there are implications for privacy and data protection. In our view though, these implications are not barriers. There are several tools and approaches that not only assist with data protection compliance but also encourage creativity, innovation, and help to ensure data quality. So it’s not big data or data protection, it’s big data and data protection. The benefits of big data, AI and machine learning will be sustained by upholding key data protection principles and safeguards.

Will Democracy Survive Big Data and Artificial Intelligence?, Dirk Helbing, et al., February 25, 2017

It can be expected that supercomputers will soon surpass human capabilities in almost all areas—somewhere between 2020 and 2060. Experts are starting to ring alarm bells. Technology visionaries, such as Elon Musk from Tesla Motors, Bill Gates from Microsoft and Apple co-founder Steve Wozniak, are warning that super-intelligence is a serious danger for humanity, possibly even more dangerous than nuclear weapons.

Algorithmic Decision Making and the Cost of Fairness, Sam Corbett-Davies, et al., January 1, 2017

The focus of this article is on the design of algorithms used for pretrial release decisions. Though a notion of algorithmic fairness is that all individuals are held to the same standard, unconstrained algorithms result in disparate treatment of minorities classified as high risk defendants. In contrast, constrained optimization seeks to maximize public safety while reducing racial disparities. The article discusses these implications of algorithmic fairness at length.
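
The sketch below is a rough, hypothetical rendering of that contrast on synthetic risk scores (the score distributions, threshold, and the "public safety" proxy are all assumptions, not the authors' algorithm): an unconstrained single-threshold rule versus a constrained rule that equalizes detention rates across groups, with the cost of the constraint counted as high-risk defendants who are released as a result.

```python
# Illustrative sketch (synthetic data, not the paper's method): comparing an
# unconstrained single-threshold rule with a constrained rule that equalizes
# detention rates across two groups.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
risk_a = rng.beta(2, 6, n)        # assumed risk-score distribution, group A
risk_b = rng.beta(3, 5, n)        # assumed risk-score distribution, group B

# Unconstrained: one threshold for everyone, but detention rates differ by group.
t = 0.5
print("single-threshold detention rates:",
      round((risk_a > t).mean(), 3), round((risk_b > t).mean(), 3))

# Constrained: pick per-group thresholds so both groups are detained at the
# same overall rate (here, the pooled rate from the single-threshold rule).
target = np.concatenate([risk_a > t, risk_b > t]).mean()
t_a = np.quantile(risk_a, 1 - target)
t_b = np.quantile(risk_b, 1 - target)
print("per-group thresholds:", round(t_a, 3), round(t_b, 3))

# A crude "cost of fairness" proxy: genuinely high-risk defendants (risk > 0.5)
# who are released under the constrained rule.
released_high_risk = ((risk_a > 0.5) & (risk_a <= t_a)).sum() + \
                     ((risk_b > 0.5) & (risk_b <= t_b)).sum()
print("high-risk defendants released under the constrained rule:",
      int(released_high_risk))
```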

Ethical Considerations in Artificial Intelligence Courses, Emanuelle Burton, et al., January 26, 2017

The recent surge in interest in ethics in artificial intelligence may leave many educators wondering how to address moral, ethical, and philosophical issues in their AI courses. As instructors, we want to develop curricula that prepare students not only to be artificial intelligence practitioners, but also to understand the moral, ethical, and philosophical impacts that artificial intelligence will have on society. In this article we provide practical case studies and links to resources for use by AI educators. We also provide concrete suggestions on how to integrate AI ethics into a general artificial intelligence course and how to teach a stand-alone artificial intelligence ethics course.

Finding a Voice, Lane Greene, January 5, 2017

Computers have got much better at translation, voice recognition and speech synthesis, says Lane Greene. But they still don’t understand the meaning of language.

Accountable Algorithms, Joshua A. Kroll, et al., January 1, 2017

We challenge the dominant position in the legal literature that transparency will solve the problems posed by automated decision-making. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the difficulty of analyzing code) to demonstrate the fairness of a process. Furthermore, transparency may be undesirable, such as when it discloses private information or permits tax cheats or terrorists to game the systems determining audits or security screening. The central issue is how to assure the interests of citizens, and society as a whole, in making these processes more accountable. This Article argues that technology is creating new opportunities—subtler and more flexible than total transparency—to design decision-making algorithms so that they better align with legal and policy objectives. Doing so will improve not only the current governance of automated decisions, but also—in certain cases—the governance of decision-making in general. The implicit (or explicit) biases of human decision-makers can be difficult to find and root out, but we can peer into the “brain” of an algorithm: computational processes and purpose specifications can be declared prior to use and verified afterward.
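
One simple mechanism in the spirit of declaring a computational process beforehand and verifying it afterward is a cryptographic commitment. The sketch below is an illustrative toy, not the Article's specific protocol: an agency publishes a hash of its decision policy before use, and an auditor later checks the revealed policy against that published commitment.

```python
# Toy commit-then-verify sketch (illustrative assumptions, not the Article's
# protocol): publish a hash of the decision policy before deployment, then
# let an auditor check the revealed policy against that commitment.
import hashlib
import json

policy = {"purpose": "audit selection", "rule": "income_gap > 10000", "version": 3}

# Before deployment: publish only this digest (the commitment).
# (A real commitment scheme would also include a random nonce so the
#  committed policy stays hidden until it is revealed.)
declared = json.dumps(policy, sort_keys=True).encode()
commitment = hashlib.sha256(declared).hexdigest()
print("published commitment:", commitment)

# After decisions are made: the auditor receives the policy actually used
# and verifies it against the earlier commitment.
def verify(revealed_policy, published_commitment):
    digest = hashlib.sha256(
        json.dumps(revealed_policy, sort_keys=True).encode()).hexdigest()
    return digest == published_commitment

print("policy matches commitment:", verify(policy, commitment))
```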

Privacy by Design in Machine Learning Data Collection: A User Experience, Jonathan Vitale, et al., January 1, 2017

Designing successful user experiences that use machine learning systems is an area of increasing importance. In supervised machine learning for biometric systems, such as for face recognition, the user experience can be improved. In order to use biometric authentication systems, users are asked for their biometric information together with their personal information. In contexts where large numbers of users must be enrolled frequently, the human expert assisting the data collection process is often replaced by software with a step-by-step user interface. However, this may introduce limitations to the overall user experience of the system. User experience should be addressed from the very beginning, during the design process. Furthermore, data collection might also introduce privacy concerns in users and potentially lead them to not use the system. For these reasons, we propose a privacy by design approach in order to maximize the user experience of the system while reducing privacy concerns of users. To do so we suggest a novel experiment in a Human-Robot interaction setting. We investigate the effects of embodiment and transparency on privacy and user experience. We expect that embodiment would enhance the overall user experience of the system, independently of transparency, whereas we expect that transparency would reduce privacy concerns of the participants. In particular, we forecast that transparency, together with embodiment, would significantly reduce privacy concerns of participants, thus maximising the amount of personal information provided by a user.

Methodologies to Guide Ethical Research and Design, IEEE, January 1, 2017

To ensure autonomous and intelligent systems (A/IS) are aligned to benefit humanity A/IS research and design must be underpinned by ethical and legal norms as well as methods. We strongly believe that a value-based design methodology should become the essential focus for the modern A/IS organization.

2016

Seeing Without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability, Mike Ananny & Kate Crawford, December 13, 2016

Models for understanding and holding systems accountable have long rested upon ideals and logics of transparency. Being able to see a system is sometimes equated with being able to know how it works and govern it—a pattern that recurs in recent work about transparency and computational systems. But can “black boxes” ever be opened, and if so, would that ever be sufficient? In this article, we critically interrogate the ideal of transparency, trace some of its roots in scientific and sociotechnical epistemological cultures, and present 10 limitations to its application. We specifically focus on the inadequacy of transparency for understanding and governing algorithmic systems and sketch an alternative typology of algorithmic accountability grounded in constructive engagements with the limitations of transparency ideals.

Recoding Privacy Law: Reflections on the Future Relationship Among Law, Technology, and Privacy, Urs Gasser, December 9, 2016

Reflecting across centuries and geographies, one common thread emerges: advancements in information and communication technologies have largely been perceived as threats to privacy and have often led policymakers to seek, and consumers to demand, additional privacy safeguards in the legal and regulatory arenas.

Robotics and Artificial Intelligence, House of Commons, Science and Technology Committee, October 12, 2016

After decades of somewhat slow progress, a succession of advances have recently occurred across the fields of robotics and artificial intelligence (AI), fueled by the rise in computer processing power, the profusion of data, and the development of techniques such as ‘deep learning’. Though the capabilities of AI systems are currently narrow and specific, they are, nevertheless, starting to have transformational impacts on everyday life: from driverless cars and supercomputers that can assist doctors with medical diagnoses, to intelligent tutoring systems that can tailor lessons to meet a student’s individual cognitive needs. Such breakthroughs raise a host of social, ethical and legal questions. Our inquiry has highlighted several that require serious, ongoing consideration. These include taking steps to minimize bias being accidentally built into AI systems; ensuring that the decisions they make are transparent; and instigating methods that can verify that AI technology is operating as intended and that unwanted, or unpredictable, behaviors are not produced. While the UK is world-leading when it comes to considering the implications of AI, and is well-placed to provide global intellectual leadership on this matter, a coordinated approach is required to harness this expertise. A standing Commission on Artificial Intelligence should be established with a remit to identify principles to govern the development and application of AI, provide advice to the Government, and foster public dialogue.

Equality of Opportunity in Supervised Learning, Moritz Hardt, et al., October 7, 2016

We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
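
A rough sketch of the kind of post-processing adjustment the paper describes, under assumed synthetic data and a crude grid search (neither taken from the paper): choose a separate threshold for each group so that true positive rates, the "equal opportunity" condition, roughly coincide.

```python
# Illustrative sketch of post-processing for equal opportunity: pick
# group-specific thresholds so true positive rates roughly match.
# Synthetic data and the simple grid search are assumptions, not the
# authors' derivation.
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
group = rng.integers(0, 2, n)                          # protected attribute
y = rng.random(n) < np.where(group == 0, 0.3, 0.5)     # different base rates
score = 0.5 * y + 0.5 * rng.random(n) + 0.1 * group    # biased learned score

def tpr(threshold, g):
    # True positive rate for group g at the given threshold.
    mask = (group == g) & y
    return (score[mask] > threshold).mean()

target = 0.8                                           # desired common TPR
thresholds = {}
for g in (0, 1):
    grid = np.linspace(0, 1, 1001)
    # pick the threshold whose group TPR is closest to the target
    thresholds[g] = grid[np.argmin([abs(tpr(t, g) - target) for t in grid])]

print("group thresholds:", {g: round(t, 3) for g, t in thresholds.items()})
print("resulting TPRs:", {g: round(tpr(thresholds[g], g), 3) for g in (0, 1)})
```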

The National Artificial Intelligence Research and Development Strategic Plan, National Science and Technology Council, Networking and Information Technology Research and Development Subcommittee, October 1, 2016

This National Artificial Intelligence R&D Strategic Plan establishes a set of objectives for Federally-funded AI research, both research occurring within the government and Federally-funded research occurring outside of government, such as in academia. The ultimate goal of this research is to produce new AI knowledge and technologies that provide a range of positive benefits to society, while minimizing the negative impacts. To achieve this goal, this AI R&D Strategic Plan identifies the following priorities for Federally-funded AI research: Strategy 1: Make long-term investments in AI research. Strategy 2: Develop effective methods for human-AI collaboration. Strategy 3: Understand and address the ethical, legal, and societal implications of AI. Strategy 4: Ensure the safety and security of AI systems. Strategy 5: Develop shared public datasets and environments for AI training and testing. Strategy 6: Measure and evaluate AI technologies through standards and benchmarks. Strategy 7: Better understand the national AI R&D workforce needs.

Preparing for the Future of Artificial Intelligence, Executive Office of the President, National Science and Technology, October 1, 2016

As a contribution toward preparing the United States for a future in which Artificial Intelligence (AI) plays a growing role, we survey the current state of AI, its existing and potential applications, and the questions that are raised for society and public policy by progress in AI. We also make recommendations for specific further actions by Federal agencies and other actors. A companion document called the National Artificial Intelligence Research and Development Strategic Plan lays out a strategic plan for Federally-funded research and development in AI.

Big Data’s Disparate Impact, Solon Barocas & Andrew D. Selbst, September 30, 2016

Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply reflect the widespread biases that persist in society at large. In still others, data mining can discover surprisingly useful regularities that are really just preexisting patterns of exclusion and inequality. Unthinking reliance on data mining can deny historically disadvantaged and vulnerable groups full participation in society. Worse still, because the resulting discrimination is almost always an unintentional emergent property of the algorithm’s use rather than a conscious choice by its programmers, it can be unusually hard to identify the source of the problem or to explain it to a court. This Essay examines these concerns through the lens of American antidiscrimination law — more particularly, through Title VII’s prohibition of discrimination in employment.

The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term, Kate Crawford & Meredith Whittaker, AI Now, September 22, 2016

The AI Now 2016 Report provides an overview of the four focus areas, summarizes key insights that emerged from discussions at the Symposium, and offers high-level recommendations for stakeholders engaged in the production, use, governance, and assessment of AI in the near-term.

Inherent Trade-Offs in the Fair Determination of Risk Scores, Jon Kleinberg, et al., September 19, 2016

Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.
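
The following toy calculation (synthetic, not the paper's proof) illustrates the incompatibility: construct a score that is perfectly calibrated within each of two groups that have different base rates, and the class-conditional average scores, the "balance" conditions, end up differing across the groups.

```python
# Illustrative arithmetic (synthetic, not the paper's proof): with unequal
# base rates, a score calibrated within each group cannot also balance
# average scores for the negative (or positive) class across groups.
import numpy as np

rng = np.random.default_rng(4)

def group_stats(share_high, n=200_000):
    # Each person gets a score of either 0.2 or 0.8; the true outcome is then
    # drawn with exactly that probability, so the score is calibrated within
    # the group. Groups differ in how many people get the high score, and
    # therefore in their base rates.
    score = np.where(rng.random(n) < share_high, 0.8, 0.2)
    y = rng.random(n) < score
    return score[~y].mean(), score[y].mean()   # mean score among negatives / positives

for group, share_high in [("A", 0.3), ("B", 0.6)]:
    neg_mean, pos_mean = group_stats(share_high)
    print(f"group {group}: mean score | negative = {neg_mean:.2f}, "
          f"mean score | positive = {pos_mean:.2f}")
# Both groups are calibrated by construction, yet the class-conditional mean
# scores differ across groups, so the "balance" conditions fail.
```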

Smart Policies for Artificial Intelligence, Miles Brundage & Joanna Bryson, August 29, 2016

We argue that there already exists de facto artificial intelligence policy – a patchwork of policies impacting the field of AI’s development in myriad ways. The key question related to AI policy, then, is not whether AI should be governed at all, but how it is currently being governed, and how that governance might become more informed, integrated, effective, and anticipatory. We describe the main components of de facto AI policy and make some recommendations for how AI policy can be improved, drawing on lessons from other scientific and technological domains.

Who’s Johnny? Anthropomorphic Framing in Human-Robot Interaction, Integration, and Policy, Kate Darling, August 29, 2016

People have a tendency to project life-like qualities onto robots. As we increasingly create spaces where robotic technology interacts with humans, this inclination raises ethical questions around use and policy. A human-robot-interaction experiment conducted in our lab indicates that framing robots through anthropomorphic language (like a personified name or story) can impact how people perceive and treat a robot. This chapter explores the effects of encouraging or discouraging people to anthropomorphize robots through framing. I discuss concerns about anthropomorphizing robotic technology in certain contexts, but argue that there are also cases where encouraging anthropomorphism is desirable. Because people respond to framing, framing could help to separate these cases.

Service Robots, Care Ethics, and Design, A. van Wynsberghe, August 22, 2016

It should not be a surprise in the near future to encounter either a personal or a professional service robot in our homes and/or our workplaces: according to the International Federation of Robotics, there will be approximately 35 million service robots at work by 2018. Given that individuals will interact and even cooperate with these service robots, their design and development demand ethical attention. With this in mind I suggest the use of an approach for incorporating ethics into the design process of robots known as Care Centered Value Sensitive Design (CCVSD). Although this approach was originally and intentionally designed for the healthcare domain, the aim of this paper is to present a preliminary study of how personal and professional service robots might also be evaluated using the CCVSD approach. The normative foundations for CCVSD come from its reliance on the care ethics tradition and in particular the use of care practices for (1) structuring the analysis and (2) determining the values of ethical import. To apply CCVSD outside of healthcare one must show that the robot has been integrated into a care practice. Accordingly, the practice in which the robot is to be used must be assessed and shown to meet the conditions of a care practice. By investigating the foundations of the approach I hope to show why it may be applicable for service robots and further to give examples of current robot prototypes that can and cannot be evaluated using CCVSD.

Supporting Ethical Data Research: An Exploratory Study of Emerging Issues in Big Data and Technical Research, Danah Boyd, Emily F. Keller & Bonnie Tijerina, August 4, 2016

This report provides valuable insights into the current state of collaboration between librarians and computer science researchers on issues of “big data” ethics. Statements and assertions represent information provided by participants, in combination with a literature review and additional formal and informal research. This report is not meant to be conclusive or comprehensive about all data science research, as we purposefully limited the scope of our work to a narrow band of institutions and actors. Yet, our findings do offer important insights that open up challenging questions and require future exploration.

Written evidence submitted to the UK Parliamentary Select Committee on Science and Technology Inquiry on Robotics and Artificial Intelligence, A.F. Winfield, July 26, 2016

This paper was submitted in response to question 4 of the Parliamentary Science and Technology Committee Inquiry on Robotics and Artificial Intelligence on: ‘The social, legal and ethical issues raised by developments in robotics and artificial intelligence technologies, and how they should be addressed’. The paper was drafted at the request of EPSRC and the UK Robotics and Autonomous Systems (RAS) Network, and an abridged version is incorporated into the UK RAS response to the inquiry.

Robots in American Law, Ryan Calo, February 24, 2016

This article closely examines a half century of case law involving robots—just in time for the technology itself to enter the mainstream. Most of the cases involving robots have never found their way into legal scholarship. And yet, taken collectively, these cases reveal much about the assumptions and limitations of our legal system. Robots blur the line between people and instrument, for instance, and faulty notions about robots lead jurists to questionable or contradictory results. The article generates, in all, nine case studies. The first set highlights the role of robots as the objects of American law. Among other issues, courts have had to decide whether robots represent something “animate” for purposes of import tariffs, whether robots can “perform” as that term is understood in the context of a state tax on performance halls, and whether a salvage team “possesses” a shipwreck it visits with an unmanned submarine. The second set of case studies focuses on robots as the subjects of judicial imagination. These examples explore the versatile, often pejorative role robots play in judicial reasoning itself. Judges need not be robots in court, for instance, or apply the law robotically. The robotic witness is not to be trusted. And people who commit crimes under the robotic control of another might avoid sanction. Together these case studies paint a nuanced picture of the way courts think about an increasingly important technology. Themes and questions emerge that illuminate the path of robotics law and test its central claims to date. The article concludes that jurists on the whole possess poor, increasingly outdated views about robots and hence will not be well positioned to address the novel challenges they continue to pose.

Regulating Healthcare Robots: Maximizing Opportunities while Minimizing Risk, Drew Simshaw, et al., February 24, 2016

This paper will focus on the issues of patient and user safety, security, and privacy, and specifically the effect of medical device regulation and data protection laws on robots in healthcare. First, it will examine the demand for robots in healthcare and assess the benefits that robots can provide. Second, it will look at the types of robots currently being used in healthcare, anticipate future innovation, and identify the key characteristics of these robots that will present regulatory issues. Third, it will examine the current regulatory framework within which these robots will operate, focusing on medical device regulation and data protection laws.

How the machine ‘thinks’: Understanding opacity in machine learning algorithms, Jenna Burrell, January 1, 2016

This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: (1) opacity as intentional corporate or state secrecy, (2) opacity as technical illiteracy, and (3) an opacity that arises from the characteristics of machine learning algorithms and the scale required to apply them usefully. The analysis in this article gets inside the algorithms themselves. I cite existing literatures in computer science, known industry practices (as they are publicly presented), and do some testing and manipulation of code as a form of lightweight code audit. I argue that recognizing the distinct forms of opacity that may be coming into play in a given application is a key to determining which of a variety of technical and non-technical solutions could help to prevent harm.

Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, Matthew U. Scherer, January 1, 2016

Artificial intelligence technology (or AI) has developed rapidly during the past decade, and the effects of the AI revolution are already being keenly felt in many sectors of the economy. A growing chorus of commentators, scientists, and entrepreneurs has expressed alarm regarding the increasing role that autonomous machines are playing in society, with some suggesting that government regulation may be necessary to reduce the public risks that AI will pose. Unfortunately, the unique features of AI and the manner in which AI can be developed present both practical and conceptual challenges for the legal system. These challenges must be confronted if the legal system is to positively impact the development of AI and ensure that aggrieved parties receive compensation when AI systems cause harm. This article will explore the public risks associated with AI and the competencies of government institutions in managing those risks. It concludes with a proposal for an indirect form of AI regulation based on differential tort liability.

2015

#SocialEthics: A guide to embedding ethics in social media research, Harry Evans, Steve Ginnis & Jamie Bartlett, November 12, 2015

One of the focuses of the Wisdom of the Crowd project is to examine the ethical landscape surrounding aggregated social media research. In spring 2015, the first publication of this ethics strand contained a review of the legal and regulatory framework for using social media in market research. This second and final report builds on these findings, presenting our conclusions from quantitative and qualitative primary research with stakeholders and social media users, and outlining our recommendations for how the research industry should look to proceed if it is to be at the forefront of using social media data in an ethical way.

Privacy Self-Management and the Consent Dilemma, Daniel Solove, October 18, 2015

The current regulatory approach for protecting privacy involves what I refer to as “privacy self-management” — the law provides people with a set of rights to enable them to decide how to weigh the costs and benefits of the collection, use, or disclosure of their information. People’s consent legitimizes nearly any form of collection, use, and disclosure of personal data. Although privacy self-management is certainly a necessary component of any regulatory regime, I contend in this Article that it is being asked to do work beyond its capabilities. Privacy self-management does not provide meaningful control. Empirical and social science research has undermined key assumptions about how people make decisions regarding their data, assumptions that underpin and legitimize the privacy self-management model. Moreover, people cannot appropriately self-manage their privacy due to a series of structural problems. There are too many entities collecting and using personal data to make it feasible for people to manage their privacy separately with each entity. Moreover, many privacy harms are the result of an aggregation of pieces of data over a period of time by different entities. It is virtually impossible for people to weigh the costs and benefits of revealing information or permitting its use or transfer without an understanding of the potential downstream uses, further limiting the effectiveness of the privacy self-management framework. In addition, privacy self-management addresses privacy in a series of isolated transactions guided by particular individuals. Privacy costs and benefits, however, are more appropriately assessed cumulatively and holistically — not merely at the individual level. In order to advance, privacy law and policy must confront a complex and confounding dilemma with consent. Consent to collection, use, and disclosure of personal data is often not meaningful, and the most apparent solution — paternalistic measures — even more directly denies people the freedom to make consensual choices about their data. In this Article, I propose several ways privacy law can grapple with the consent dilemma and move beyond relying too heavily on privacy self-management.

Data, privacy, and the greater good, Eric Horvitz & Deirdre Mulligan, July 17, 2015

Large-scale aggregate analyses of anonymized data can yield valuable results and insights that address public health challenges and provide new avenues for scientific discovery. These methods can extend our knowledge and provide new tools for enhancing health and wellbeing. However, they raise questions about how to best address potential threats to privacy while reaping benefits for individuals and society as a whole. The use of machine learning to make leaps across informational and social contexts to infer health conditions and risks from nonmedical data provides representative scenarios for reflecting on how to balance innovation and regulation.

Unfair and Deceptive Robots, Woodrow Hartzog, May 5, 2015

Robots, like household helpers, personal digital assistants, automated cars, and personal drones are or will soon be available to consumers. These robots raise common consumer protection issues, such as fraud, privacy, data security, and risks to health, physical safety and finances. Robots also raise new consumer protection issues, or at least call into question how existing consumer protection regimes might be applied to such emerging technologies. Yet it is unclear which legal regimes should govern these robots and what consumer protection rules for robots should look like. The thesis of the Article is that the FTC’s grant of authority and existing jurisprudence make it the preferable regulatory agency for protecting consumers who buy and interact with robots. The FTC has proven to be a capable regulator of communications, organizational procedures, and design, which are the three crucial concepts for safe consumer robots. Additionally, the structure and history of the FTC shows that the agency is capable of fostering new technologies as it did with the Internet. The agency generally defers to industry standards, avoids dramatic regulatory lurches, and cooperates with other agencies. Consumer robotics is an expansive field with great potential. A light but steady response by the FTC will allow the consumer robotics industry to thrive while preserving consumer trust and keeping consumers safe from harm.

Robots in the Home: What Have we Agreed To?, Margot Kaminski, April 11, 2015

This essay begins by identifying the legally salient features of home robots: the aspects of home robots that will likely drive the most interesting legal questions. It then explores how current privacy law governing both law enforcement and private parties addresses a number of questions raised by home robots. First, how does privacy law treat entities that enter places (physically, or through sense-enhancing technologies) where they are not invited? Second, how does privacy law treat entities that are invited into a physical space, but were not invited to record in that space? Third, how does privacy law treat consent, both express and implied? Fourth, how does privacy law address entities that lull or deceive people into revealing more than they intend to? And finally, in the private actor context, will robotic recording be considered to be speech?

Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being, Jason Borenstein & Ron Arkin, March 4, 2015

Robots are becoming an increasingly pervasive feature of our personal lives. As a result, there is growing importance placed on examining what constitutes appropriate behavior when they interact with human beings. In this paper, we discuss whether companion robots should be permitted to “nudge” their human users in the direction of being “more ethical”. More specifically, we use Rawlsian principles of justice to illustrate how robots might nurture “socially just” tendencies in their human counterparts. Designing technological artifacts in such a way to influence human behavior is already well-established but merely because the practice is commonplace does not necessarily resolve the ethical issues associated with its implementation.

A Review of Verbal and Non-Verbal Human-Robot Interactive Communication, N. Mavridis, January 1, 2015

In this paper, an overview of human–robot interactive communication is presented, covering verbal as well as non-verbal aspects. Following a historical introduction, and motivation towards fluid human–robot communication, ten desiderata are proposed, which provide an organizational axis both of recent as well as of future research on human–robot communication. Then, the ten desiderata are examined in detail, culminating in a unifying discussion, and a forward-looking conclusion.