Summary of articles, reports, and updates on AI and related topics
As of May 28, 2019
TABLE OF CONTENTS:
NEWS FROM FPF
- FPF on the Hill
MONTHLY NEWS AND UPDATES
- In Government, Law, and Regulation
- AI and Machine Learning in the News
- The State of AI
- AI and Ethics
- AI and Cars
- AI and Education
- AI and Biometrics
- International
- Research, Reports, and Books
- Bonus Round
NEWS FROM FPF:
FPF on the Hill
FPF CEO Jules Polonetsky testified at the US Senate Committee on Commerce, Science, and Transportation on Wednesday, May 1, 2019. Details here. “The hearing will examine consumers’ expectations for data privacy in the Digital Age and how those expectations may vary based on the type of information collected and processed by businesses. In addition, the hearing will examine how to provide consumers with meaningful tools and resources to make more informed privacy decisions about the products and services they use both online and offline. The panel will also discuss data privacy rights, controls, and protections that should be available to consumers and enshrined into law in the United States.”
MONTHLY AI NEWS AND UPDATES:
In Government, Law, and Regulation
Next Steps On Facial Recognition, Politico Pro, May 22, 2019 — House Oversight Chairman Elijah Cummings (D-Md.) signaled confidence after a hearing on facial recognition Wednesday that committee members will come to a bipartisan agreement on rules to limit the technology’s use. Cummings told reporters that after hosting a planned second hearing on the topic in June, he hopes to draft legislation that will “put a halt on facial recognition until there is some working out of the problems” discussed at the hearing, including concerns over privacy, discrimination, and transparency. Lawmakers across the political spectrum expressed interest Wednesday in a moratorium halting government use of the software until more permanent protections and restrictions are put in place, including Freedom Caucus co-founder Rep. Jim Jordan (R-Ohio) and progressive firebrand Rep. Alexandria Ocasio-Cortez (D-N.Y.).
[A second Hearing on this topic has been announced, but not yet scheduled.]
AI/Machine Learning in the News
- The State of AI
- Mapping Global Approaches to AI Governance, Nesta, January 2019. Nesta is launching a new pilot project ‘Mapping AI Governance’, an information resource about global governance activities related to artificial intelligence. It features a searchable database, a map and a timeline of AI-related governance activities, such as national strategies, regulations, standardisation initiatives, ethics guidelines, and recommendations from various bodies. We want to use the map as a way of advancing discussions on Artificial Intelligence (AI) governance and regulation, as well as helping researchers, innovators, and policymakers gain a useful tool to better understand what’s happening around the world.
- Structural Disconnects Between Algorithmic Decision-Making and the Law, Humanitarian Law & Policy, Suresh Venkatasubramanian, April 25, 2019. Reflections on an epistemic disconnect between technology (and machine learning-based modeling in particular) and the law. “As our ability to use technology to target and profile people advances, it seems like current legal guidelines are struggling to keep up. But I argue that there are much deeper disconnects between the very way a ‘computer science-centric’ viewpoint looks at the world and human processes, and how the law looks at it. I will focus on two aspects of this disconnect: the tension between process and outcome, and the challenge of vagueness and contestability. And while I’ll draw my examples from our workshop discussions of AI in war zones, the points I make apply quite generally.”
- Discriminating Systems: Gender, Race, and Power in AI, West, Whittaker, and Crawford, AI Now, April 2019. “The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation. The histories of ‘race science’ are a grim reminder that race and gender classification based on appearance is scientifically flawed and easily abused. Systems that use physical appearance as a proxy for character or interior states are deeply suspect, including AI tools that claim to detect sexuality from headshots, predict ‘criminality’ based on facial features, or assess worker competence via ‘micro-expressions.’ Such systems are replicating patterns of racial and gender bias in ways that can deepen and justify historical inequality. The commercial deployment of these tools is cause for deep concern.”
- Female-voice AI reinforces bias, says UN report, BBC News, Jane Wakefield, May 21, 2019. AI-powered voice assistants with female voices are perpetuating harmful gender biases, according to a UN study. These female helpers are portrayed as “obliging and eager to please”, reinforcing the idea that women are “subservient”, it finds. Particularly worrying, it says, is how they often give “deflecting, lackluster or apologetic responses” to insults. The report calls for technology firms to stop making voice assistants female by default.
Full report: Gender Divides in AI, UNESCO Digital Library.
- The Art of AI, a special edition from Forbes and Intel on some recent trends in AI. Please see the complete digital edition here, as well as noted articles: Abigail Wen & Amir Khosrowshahi: Amazing AI: Four Breakthroughs Everyone Should Know About; Casimir Wierzynski: How To Run AI Research Right—And Why That Matters So Much To Corporate Success. (Print edition coming in June.)
- AI and Ethics
- AI and Human Rights: We Need to Talk about the Use Phase, BSR, May 15, 2019. Blog summarizing updates from BSR based on client interactions and specific use cases, since the release of 3 reports last August setting out the importance of taking a human rights-based approach to the development, deployment, and the use of AI.
- Readings in AI Ethics, Irina Raicu, Markkula Center for Applied Ethics, Santa Clara University. “Amid a maelstrom of articles and academic papers addressing the ethics of artificial intelligence, the following selection of readings aims to highlight some key issues. While it is by no means exhaustive, we hope it will provide a useful starting point for conversations about AI ethics.”
An excellent list of readings and resources on AI ethics; worth tagging for future reference.
- AI and Cars
- Risk of AI Bias in Self-Driving, EET Asia, Michael Wagner, May 8, 2019. AI has the potential to make us safer, but can it be fair? As AI is used for more safety-critical applications, it’s easy to see how bias could pose risks. Self-driving cars use AI, not unlike facial recognition, for detecting pedestrians. We don’t want cars to be more likely to get into accidents with people who have longer hair, darker skin, or shorter stature. Safety shouldn’t be contingent on how you look.
- AI and Education
- How to Teach Kids About AI, WSJ, Michelle Ma, May 13, 2019. Today’s middle schoolers may be the first “artificial intelligence natives,” a generation that’s grown up interacting with YouTube’s algorithm or Amazon’s Alexa smart speaker. Educators are grappling with how to teach children to be responsible consumers of the technology. Blakeley H. Payne has one idea. A graduate research assistant at MIT Media Lab who studies the ethics of AI, Ms. Payne designed a curriculum to teach children about concepts like algorithmic bias and deep learning. She tested the week-and-a-half-long program in October with about 225 fifth- through eighth-grade students.
- Schools are Using Software to Help Pick Who Gets In. What Could Go Wrong? FastCompany, May 17, 2019. Algorithms aren’t just helping to orchestrate our digital experiences but increasingly entering sectors that were historically the province of humans—hiring, lending, and flagging suspicious individuals at national borders. Now a growing number of companies, including Salesforce, are selling or building AI-backed systems that schools can use to track potential and current students, much in the way companies keep track of customers. Increasingly, the software is helping admissions officers decide who gets in.
- AI and Biometrics
- Recent Rulings Pull Ill. Biometric Law in Opposite Directions, SecureIDNews, April 17, 2019. “…when it comes to the Illinois Biometric Privacy Act. The two recent decisions — one from a state court, the other from a federal judge — raise questions of what exactly constitutes standing when it comes to biometric data and storage, and the state laws governing those activities.”
- Strict New FL Biometric Information Privacy Act Proposed in Legislature, SecureIDNews, March 2019. In Florida, two state lawmakers — Sen. Gary Farmer and Rep. Bobby DuBose, both Democrats — have proposed a law called the Florida Biometric Information Privacy Act that would, according to one analysis, “establish requirements and restrictions on private entities as to the use, collection, and maintenance of biometric identifiers and biometric information.” The proposed law calls for penalties of up to $5,000 for each violation. …the proposed law, which is “strikingly similar” to the Illinois law, according to a legal analysis, would require “private entities in possession of biometric identifiers or biometric information to develop a publicly available written policy establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information.”
International
- OECD Recommendation of the Council on AI: The principles, which aren’t legally binding, call for transparent and responsible disclosure of AI system operations. This will help ensure people “understand when they are engaging with them and can challenge outcomes.” The document urges continuous assessment and management of AI systems to ensure safety throughout program lifetimes. The systems should include appropriate safeguards, “enabling human intervention where necessary — to ensure a fair and just society.” AI system operators should be accountable for their proper functioning in line with the principles. Next month, the organization will release an overview of AI with additional information, and promises practical guidance fleshing out the current report by late this year. The European Commission has also expressed support for the Principles.
- Ethical Guidelines for Trustworthy AI, EU Commission, April 8, 2019. The High-Level Expert Group on AI presents its ethics guidelines for trustworthy artificial intelligence. This follows the publication of the guidelines’ first draft in December 2018, on which more than 500 comments were received through an open consultation. According to the guidelines, trustworthy AI should be: (1) lawful – respecting all applicable laws and regulations, (2) ethical – respecting ethical principles and values, and (3) robust – both from a technical perspective and with regard to its social environment. The guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy. A specific assessment list aims to help verify the application of each of the key requirements: Human Agency and Oversight; Technical Robustness and Safety; Privacy and Data Governance; Transparency; Diversity, Non-Discrimination, and Fairness; Societal and Environmental Well-being; and Accountability.
- Automated Decision Making: the Role of Meaningful Human Reviews, AI Auditing Framework Blog, April 2019. The meaningfulness of human review in non-solely automated AI applications, and the management of the risks associated with it, are key areas of focus for our proposed AI Auditing Framework and what we will be exploring further in this blog. What’s already been said? Both the ICO and the European Data Protection Board (EDPB) have already published guidance relating to these issues. The key messages are:
- Human reviewers must be involved in checking the system’s recommendation and should not “routinely” apply the automated recommendation to an individual;
- reviewers’ involvement must be active and not just a token gesture. They should have actual “meaningful” influence on the decision, including the “authority and competence” to go against the recommendation; and
- reviewers must ‘weigh up’ and ‘interpret’ the recommendation, consider all available input data, and also take into account ‘other additional factors’.
Research, Reports, and Books
- Empathy, Democracy, and the Rule of Law (review of paper: Kiel Brennan-Marquez & Stephen E. Henderson, Artificial Intelligence and Role-Reversible Judgment, __ J. of Crim. L. and Criminology __ (forthcoming), available at SSRN), by Frank Pasquale, Technology Law JOTWELL, May 8, 2019. Are some types of robotic judging so troubling that they simply should not occur? In Artificial Intelligence and Role-Reversible Judgment, Kiel Brennan-Marquez and Stephen E. Henderson say yes, confronting an increasingly urgent question. They illuminate dangers inherent in the automation of judgment, rooting their analysis in a deep understanding of classic jurisprudence on the rule of law.
- AI Ethics – Too Principled to Fail? Brent Mittelstadt, University of Oxford, Oxford Internet Institute, May 20, 2019. AI Ethics is now a global topic of discussion in academic and policy circles. At least 63 public-private initiatives have produced statements describing high-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI. According to recent meta-analyses, AI Ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI Ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach in the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.
Bonus Round –
- Machines Like Me, (fiction) Ian McEwan. Machines Like Me takes place in an alternative 1980s London. Charlie, drifting through life and dodging full-time employment, is in love with Miranda, a bright student who lives with a terrible secret. When Charlie comes into money, he buys Adam, one of the first synthetic humans, and—with Miranda’s help—he designs Adam’s personality. The near-perfect human that emerges is beautiful, strong, and clever. Before long a love triangle forms, and these three beings confront a profound moral dilemma. In his subversive new novel, Ian McEwan asks whether a machine can understand the human heart—or whether we are the ones who lack understanding.
Interview with the author.
- Program This Flying Robot to Follow You Everywhere, Mashable (Twitter), May 9, 2019. “Meet Fleye, an autonomous flying drone that is designed with users in mind.”