The House’s SELF DRIVE Act Races Ahead on Privacy

In a rare moment of bipartisanship, the House Energy and Commerce Committee yesterday unanimously approved the SELF DRIVE Act (H.R. 3388), sending it to the full House of Representatives for consideration. The bill facilitates the introduction and testing of autonomous cars by clarifying federal and state roles and by granting exemptions from motor vehicle standards that have impeded the introduction of new automated vehicle technologies. The vote was an important step toward enabling new technologies that have the potential to transform the future of mobility and maximize consumer safety.

The latest version of the bill includes a significant section on consumer privacy, which primarily requires that manufacturers create a written “privacy plan” for every automated vehicle. This privacy plan must explain a manufacturer’s collection, use, sharing, and storage of information about vehicle owners and occupants, and detail manufacturers’ approaches to core privacy principles like data minimization, de-identification, and information retention. Carmaker practices for information that is de-identified, anonymized, or encrypted do not need to be detailed in the privacy plan.

The House’s support for these provisions underscores the growing role that data will play in connected vehicles, and the importance of responsible data practices for this emerging field.

Automakers have proactively tackled this issue, with nearly all automakers developing and committing to the Automotive Privacy Principles in 2014. The Principles, which are enforceable by the Federal Trade Commission, require transparency, affirmative consent for the sharing of sensitive data for marketing purposes, and limited sharing of covered information with law enforcement. Many of the provisions in the House bill reflect similar commitments made in these Principles. Moreover, NHTSA’s Federal Automated Vehicles Policy recommends that entities produce a Safety Assessment Letter (SAL) before they introduce new technologies. The SAL, which becomes a legal requirement in the latest version of the House bill, already includes a provision that companies in the ecosystem outline their privacy practices, ensuring a baseline of consumer privacy protection.

The bill also underscores the Federal Trade Commission’s role regarding connected vehicles. While the FTC has authority to bring enforcement actions against unfair or deceptive privacy and data practices across sectors including transportation, the bill highlights the agency’s ability to enforce violations of the privacy-related sections of the bill, and calls on the FTC to study manufacturer privacy plans and practices. The FTC is actively collaborating with NHTSA on this topic, co-hosting a workshop on privacy and security issues related to connected and automated vehicles in June, where the agencies committed to minimize duplication while ensuring consumer protection around privacy and cybersecurity in connected cars.

The bill also calls for the creation of a Highly Automated Vehicle Advisory Council that will monitor and provide advice to NHTSA on several issues, including the protection of consumer privacy and security. The Council will have the flexibility to monitor this space and recommend best practices going forward.

The House bill provides flexibility for manufacturers to determine best practices in a nascent industry, where data is only beginning to play a part. The exact data that will need to be generated, stored, and shared to facilitate self-driving cars is not yet known, even by industry experts, and a bill that requires a plan but provides flexibility on exact treatment of such data is a promising step.

The Committee’s work on the SELF DRIVE Act has been a successful bipartisan effort and seems likely to advance with continued support after the House recess. A Senate bill on self-driving cars is expected shortly, and FPF will stay tuned to see if privacy provisions are included.

Additional Resources

See FPF’s consumer guide to the connected car here

See FPF’s infographic mapping data flows in the connected car here

See FPF’s comments on the Federal Automated Vehicles Policy here

See FPF’s comments on the FTC/NHTSA Workshop here

Privacy Protective Research: Facilitating Ethically Responsible Access to Administrative Data

This paper provides strategies for organizations to minimize risks of re-identification and privacy violations for individual data subjects. In addition, it suggests privacy and ethical concerns can be most effectively managed by supporting the development of administrative data centers. These institutions will serve as centers of expertise for de-identification, certify researchers, provide state-of-the-art data security, organize ethical review boards and support best practices for cleansing and managing data sets.

The discussion is organized around four topics: 1) Privacy and Confidentiality; 2) The Interests of Data Producers; 3) Gaining Access: The Lessons of Experience; and 4) Lessons Learned from Infrastructure Successes in Other Contexts.

The Top 10: Student Privacy News (June – July 2017)

The Future of Privacy Forum tracks student privacy news very closely, and shares relevant news stories with our newsletter subscribers.* Approximately every month, we post “The Top 10,” a blog with our top student privacy stories. 

The Top 10

  1. FERPA|Sherpa continues to grow! FPF published new blogs on protecting your child’s privacy when they go to summer camp (Leah Plunkett from the Berkman Klein Center) and Higher Ed Chief Privacy Officers (Joanna Grama from EDUCAUSE). We have also continued to add new resources to the Resource Search Center. Check out the site!
  2. Carnegie Mellon University grad students released a study on ed tech start-ups and student privacy, finding that they often fail to “prioritize student data protections,” and that investors do not tend to discuss privacy with their investees (the only exceptions I know about are AT&T Aspire and the Michelson 20MM Foundation). The release of the study was widely covered in the press.
  3. The House Subcommittee on Early Childhood, Elementary, and Secondary Education held the hearing “Exploring Opportunities to Strengthen Education Research While Protecting Student Privacy” on June 28th. The consensus: “states need federal guidance on student data privacy,” and “It’s Time” to update FERPA. As mentioned in the previous newsletter, a very similar hearing was held on March 22nd last year, which is probably why very few lawmakers were in attendance. You can read my live tweets from the hearing, and check out my op-ed on this topic from last year.
  4. The Louisiana governor vetoed a bill that would have allowed researchers outside of Louisiana to access student data for research, subject to civil penalties for any violation of student privacy (more about the problem the bill was addressing here). Even after being rolled back a year after passage because of many unintended consequences, the Louisiana student privacy law remains one of the strictest in the country.
  5. The U.S. Department of Education’s Regulatory Reform Task Force issued a progress report with a list of regulations that need to be updated – including FERPA and PPRA regulations (more info on the task force report via EdWeek) (h/t Doug Levin).
  6. Elana Zeide’s article, “The Structural Consequences of Big Data-Driven Education,” was published in the journal Big Data.
  7. John Warner writes a really interesting article in Inside Higher Ed about “Algorithmic Assessment vs. Critical Reflection.” One particularly thought-inspiring quote: “I am disconcerted by an educational model where students primarily receive attention when they’re “struggling.” This suggests a framework where the goal of education is simply to stay off the algorithm’s radar, rather than maximize each student’s potential.”
  8. In Australia, “An algorithm is using government data to tell those from low socioeconomic backgrounds their likelihood of completing university, but privacy experts say it could be utilised for early intervention instead of discouragement.”
  9. When should schools be able to access student social media? TrustED posted an article about the issue, and EdWeek reported on “10 Social Media Controversies That Landed Students in Trouble This School Year.” A student “tried to expose a schoolmate’s racism by reposting” her remarks on social media and was disciplined by the school, and the ACLU of Ohio is pushing back. A new paper published this month found that “women and young people are more likely to experience the chilling effects of surveillance,” and “the younger the participant, the greater the chilling effect.” For a look at surveillance and student privacy, check out my report from last fall.
  10. Personalized Learning articles proliferated this month in response to a RAND report on personalized learning implementation. Ben Herold at EdWeek reported that “Chan-Zuckerberg to Push Ambitious New Vision for Personalized Learning;” the New Schools Venture Fund Summit emphasized that “philanthropists and school leaders need to make a ‘big bet’ on dramatically reshaping schools” through personalized learning; Common Sense Media’s Bill Fitzgerald was on a podcast about “Personalized Learning and the Disruption of Public Education;” and there were other think pieces on personalized learning in RealClearEducation, The Economist, EdTech Strategies, and the Christensen Institute. It may be worth revisiting the Data & Society paper on “Personalized Learning: The Conversations We’re Not Having” from last year and its discussion of some of the privacy implications of personalized learning.

Image: “image_019” by Brad Flickinger  is licensed under CC BY 2.0.

The Future of Digital Privacy

Jules Polonetsky, Future of Privacy Forum’s CEO, was featured on Episode 5 of The Front Row, a podcast by 2U. The conversation centered on responsible data collection and the future of digital privacy. Jules discussed how chief privacy officers and cybersecurity experts will be able to harness the good in technology and mitigate the risks. He explained:

“If they are empowered to shape responsible decisions, we’ll help make sure that we have a world that is not Orwellian but that uses technology so that we have better health, more free time, more time to do important things like spend it with our family and be healthy and achieve great things.”

LISTEN

Read Transcript

Privacy in the age of data: Regulation for human rights and the economy

Friends of Europe recently released a discussion paper, ‘Policy choices for a digital age – taking a whole economy, whole society approach’, at the closing plenary of the Net Futures 2017 conference in Brussels, which was co-organised by the European Commission.

Jules Polonetsky, Future of Privacy Forum’s CEO, contributed an article titled, ‘Privacy in the age of data: Regulation for human rights and the economy.’ His article examines how companies can enhance trust in the digital economy while also strengthening the deep mutual values that citizens and consumers so cherish in both Europe and the US.

READ PAPER

Meet FPF's 2017 Summer Interns!

Pictured Above: FPF Interns during a visit to Google’s Washington, D.C. offices.

We are pleased to introduce FPF’s 2017 Summer Interns. FPF interns work with policy staff on a range of substantive projects.  They perform research and craft analysis regarding the intersection of privacy and emerging technologies, including: connected cars, the Internet of Things, education technologies, smart communities, de-identification, advertising technology, biometrics, and genetic analysis.  FPF interns meet with influential policymakers, industry leaders, academics, and privacy advocates. They provide crucial support for FPF projects and stakeholder engagement.  We also like to think they have a bit of fun.

Please click below to meet our interns!

Intern Profiles

Privacy Scholarship Research Reporter: Issue 2, July 2017 – Artificial Intelligence and Machine Learning: The Privacy Challenge

Notes from FPF

Building on our first issue, which discussed the various privacy challenges related to algorithmic accountability, Future of Privacy Forum’s Privacy Scholarship Reporter now turns its focus to thoughtful academic treatments of the privacy challenges and ethical data use considerations raised by AI and Machine Learning.


Artificial Intelligence is perhaps easier to intuitively grasp than to explicitly define – a truth that embodies the very challenge of trying to design machines that reflect what it’s like to be human. With every new technology, there is the question, “what ‘new’ privacy challenge does this platform, service, or capability pose?” Are there new privacy challenges in AI? Or perhaps there are just the same questions about consent, transparency, use, and control, but in new contexts and products? If there are new aspects – can the existing policy framework address them sufficiently? In AI, we may find that there are indeed challenges that expand beyond simply greater scope and scale, and that push us to define new tools with which to address them.

What we do know is that we cannot leave AI or Machine Learning in a black box. While retail recommendations for “people who bought this also bought that” seem clear and reasonable, what do we understand about stock-picking models that underlie our economy? A language translation program may feel straightforward, but what about the selection of news or travel options or job offers that are tied to your multi-language capabilities, or one’s demonstrated interest in – or distaste for – other cultures?

Machines are learning to read our emotions, interpret body language, and predict our comfort-seeking behavior. Are we building bigger and more impenetrable bubbles that will limit or divide us? Or are we creating more extended complex worlds that allow us to know and understand more about the world around us? How can we tell, and how can we understand and control our own data “selves” in the process? These are areas that deserve focused attention, and the scholarship addressing them is only just beginning.

In this issue are articles that provide an excellent basis and introduction to Machine Learning and Artificial Intelligence. They include publications that: propose methods that might combat potential discrimination and bias in predictive modeling based on AI; question whether existing regulatory and legal norms are sufficient for the challenges of AI (including privacy) or whether new frameworks may be desirable; ask how AI will impact individual rights, including 4th Amendment questions; delve into the tricky questions of ethics regarding the widespread use of AI systems in the general population; and explore what the presence of robots in our homes will do to our understanding of privacy.

Is there important scholarship missing from our list? Send your comments or feedback to [email protected]. We look forward to hearing from you.

Brenda Leong, Senior Counsel and Director of Strategy, FPF


Big Data, Artificial Intelligence, Machine Learning and Data Protection

THE UNITED KINGDOM INFORMATION COMMISSIONER’S OFFICE

This discussion paper looks at the implications of big data, artificial intelligence (AI) and machine learning for data protection, and explains the ICO’s views on these. It defines big data, AI and machine learning, and identifies the particular characteristics that differentiate them from more traditional forms of data processing. Recognizing the benefits that can flow from big data analytics, the paper analyzes the main implications for data protection and examines some of the tools and approaches that can help organizations ensure that their big data processing complies with data protection requirements. It also discusses the argument that data protection, as enacted in current legislation, will not work for big data analytics, and the growing role of accountability alongside the more traditional principle of transparency. The main conclusions are that, while data protection can be challenging in a big data context, the benefits need not be achieved at the expense of data privacy rights, and that meeting data protection requirements will benefit both organizations and individuals. After the conclusions, six key recommendations for organizations using big data analytics are presented.

“Big Data, Artificial Intelligence, Machine Learning and Data Protection” by The United Kingdom Information Commissioner’s Office, March 2017.


Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI

S. BIRD, S. BAROCAS, K. CRAWFORD, F. DIAZ, H. WALLACH

This paper points out that while computer science has long performed large-scale experimentation on users, novel autonomous experimentation systems driven by advances in artificial intelligence are raising complex, unanswered questions for the field. Some of these questions are computational, while others relate to the social and ethical implications of these systems. The authors identify several questions about the social and ethical implications of autonomous experimentation systems, concerning the design of such systems, their effects on users, and their resistance to some common mitigations.

Authors’ Abstract

In the field of computer science, large-scale experimentation on users is not new. However, driven by advances in artificial intelligence, novel autonomous systems for experimentation are emerging that raise complex, unanswered questions for the field. Some of these questions are computational, while others relate to the social and ethical implications of these systems. We see these normative questions as urgent because they pertain to critical infrastructure upon which large populations depend, such as transportation and healthcare. Although experimentation on widely used online platforms like Facebook has stoked controversy in recent years, the unique risks posed by autonomous experimentation have not received sufficient attention, even though such techniques are being trialled on a massive scale. In this paper, we identify several questions about the social and ethical implications of autonomous experimentation systems. These questions concern the design of such systems, their effects on users, and their resistance to some common mitigations.

“Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI” by S. Bird, S. Barocas, K. Crawford, F. Diaz, H. Wallach, Microsoft Research New York City; Workshop on Fairness, Accountability, and Transparency in Machine Learning.
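For readers new to the topic, the sketch below is a minimal, hypothetical illustration of the kind of autonomous experimentation system the authors have in mind: an epsilon-greedy bandit that continuously assigns incoming users to options and updates its estimates from their responses. The option count, reward probabilities, and variable names are invented for illustration and are not drawn from the paper.

```python
# Illustrative epsilon-greedy bandit: an autonomous experimenter that keeps
# assigning incoming users to options and updates its estimates from feedback.
# All names and reward probabilities here are invented for illustration.
import random

true_reward_prob = [0.05, 0.08, 0.03]   # unknown to the system
estimates = [0.0] * 3                   # running estimate of each option's reward rate
counts = [0] * 3
epsilon = 0.1                           # fraction of traffic used to explore

for user in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(3)                        # explore a random option
    else:
        arm = max(range(3), key=lambda a: estimates[a])  # exploit the current best
    reward = 1 if random.random() < true_reward_prob[arm] else 0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean

print(counts, [round(e, 3) for e in estimates])
```

Because the system decides for itself which users see which option, every interaction is also an experiment, which is precisely the dynamic that raises the social and ethical questions the authors examine.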


Averting Robot Eyes

M.E. KAMINSKI, M. RUEBEN, C. GRIMM, W.D. SMART

The authors argue that home robots will inevitably cause privacy harms while acknowledging that robots can provide beneficial services — as long as consumers trust them. This paper evaluates potential technological solutions that could help home robots keep their promises, avert their “eyes”, and otherwise mitigate privacy harms. The goal of the study is to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms. Five principles for home robots and privacy design are proposed: data minimization, purpose specification, use limitation, honest anthropomorphism, and dynamic feedback and participation. Current research into privacy-sensitive robotics, evaluating what technological solutions are feasible and where the harder problems lie, is also discussed.

Authors’ Abstract

Home robots will cause privacy harms. At the same time, they can provide beneficial services — as long as consumers trust them. This Essay evaluates potential technological solutions that could help home robots keep their promises, avert their eyes, and otherwise mitigate privacy harms. Our goals are to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms.

We posit that home robots will raise privacy problems of three basic types: (1) data privacy problems; (2) boundary management problems; and (3) social/relational problems. Technological design can ward off, if not fully prevent, a number of these harms. We propose five principles for home robots and privacy design: data minimization, purpose specifications, use limitations, honest anthropomorphism, and dynamic feedback and participation. We review current research into privacy-sensitive robotics, evaluating what technological solutions are feasible and where the harder problems lie. We close by contemplating legal frameworks that might encourage the implementation of such design, while also recognizing the potential costs of regulation at these early stages of the technology.

“Averting Robot Eyes” by M.E. Kaminski, M. Rueben, C. Grimm, W.D. Smart, Maryland Law Review, Vol. 76, p. 983, 2017.


Ethically Aligned Design

THE IEEE GLOBAL INITIATIVE FOR ETHICAL CONSIDERATIONS IN ARTIFICIAL INTELLIGENCE AND AUTONOMOUS SYSTEMS

The IEEE Global Initiative provides the opportunity to bring together multiple voices in the Artificial Intelligence and Autonomous Systems communities to identify and find consensus on timely issues. The document’s purpose is to advance a public discussion of how these intelligent and autonomous technologies can be aligned with moral values and ethical principles that prioritize human well-being. It includes eight sections, each addressing a specific topic related to AI/AS that has been discussed at length by a specific committee of The IEEE Global Initiative. Issues and candidate recommendations pertaining to these topics are listed in each committee section. The eight sections are: General Principles; Embedding Values in Autonomous Intelligence Systems; Methodologies to Guide Ethical Research and Design; Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI); Personal Data and Individual Access Control; Reframing Autonomous Weapons Systems; Economics/Humanitarian Issues; and Law.

“Ethically Aligned Design” by The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, December 2016.


Law and Regulation of Artificial Intelligence and Robots: Conceptual Framework and Normative Implications

N. PETIT

In light of the many challenges that affect attempts to devise law and regulation in a context of technological incipiency, this paper seeks to offer a methodology geared to the specific fields of AIs and robots. It addresses the following normative question: should a social planner adopt specific rules and institutions for AIs and robots, or should the resolution of issues be left to Hume’s three “fundamental laws of nature”, namely ordinary rules on property and liability, contract laws and the court system? The four sections review the main regulatory approaches proposed in the existing AI and robotics literature; discuss identifiable regulatory trade-offs, that is, the threats and opportunities created by the introduction of regulation for AIs and robotic applications; examine liability as a case study; and present a possible methodology for the law and regulation of AIs and robots.

Author’s Abstract

Law and regulation of Artificial Intelligence (“AI”) and robots is emerging, fuelled by the introduction of industrial and commercial applications in society. A common thread to many regulatory initiatives is to occur without a clear or explicit methodological framework. In light of the many challenges that affect attempts to devise law and regulation in a context of technological incipiency, this paper seeks to offer a methodology geared to the specific fields of AIs and robots. At bottom, the paper addresses the following normative question: should a social planer adopt specific rules and institutions for AIs and robots or should the resolution of issues be left to Hume’s three “fundamental laws of nature”, namely ordinary rules on property and liability, contract laws and the courts system? To explore that question, the analysis is conducted under a public interest framework.

Section 1 reviews the main regulatory approaches proposed in the existing AI and robotic literature, and stresses their advantages and disadvantages. Section 2 discusses identifiable regulatory trade-offs, that is the threats and opportunities created by the introduction of regulation in relation to AIs and robotic applications. Section 3 focuses on the specific area of liability as a case-study. Finally, Section 4 proposes a possible methodology for the law and regulation of AIs and robots. In conclusion, the paper proposes to index the regulatory response upon the nature of the externality – positive or negative – created by an AI application, and to distinguish between discrete, systemic and existential externalities.

“Law and Regulation of Artificial Intelligence and Robots: Conceptual Framework and Normative Implications” by N. Petit, University of Liege – School of Law; International Center for Law & Economics (ICLE), March 9, 2017.


Machine Learning: The Power and Promise of Computers That Learn by Example

THE ROYAL SOCIETY

This report by The Royal Society provides an excellent overview of machine learning, its potential, and its impact on society. Through this initiative, the Society sought to investigate the potential of machine learning over the next 5-10 years and the barriers to realizing that potential. In doing so, the project engaged with key audiences — in policy, industry, academia and the public — to raise awareness of machine learning, understand views held by the public, contribute to the public debate about machine learning, and identify the key social, ethical, scientific and technical issues it presents. Chapters five and six discuss the societal impact of machine learning, looking more closely at the ethical and technological privacy challenges these technologies create.

“Machine Learning: The Power and Promise of Computers That Learn by Example” by The Royal Society, April 2017.


Rethinking the Fourth Amendment in the Age of Supercomputers, Artificial Intelligence, and Robots

M. REID

This paper posits that it is not farfetched to think law enforcement’s use of cognitive computing will extend to using thinking, real-time robots in the field in the not-so-distant future. IBM’s Watson currently uses its artificial intelligence to suggest medical diagnoses and treatment in the healthcare industry and assists the finance industry in improving investment decisions. In this article, the author explores the consequences of predictive and content analytics and the future of cognitive computing, including the use of “robots” such as an imaginary “Officer Joe Roboto” in the law enforcement context. Would our interactions with Officer Joe Roboto trigger the same Fourth Amendment concerns and protections as those when dealing with a flesh-and-blood police officer? Are we more afraid of a “robotic” Watson, its capabilities, and lack of feeling and biases, compared to a human law enforcement officer? The article attempts to explore the ramifications of using such computers and robots in the future.

Author’s Abstract

Law enforcement currently uses cognitive computers to conduct predictive and content analytics and manage information contained in large police data files. These big data analytics and insight capabilities are more effective than using traditional investigative tools and save law enforcement time and a significant amount of financial and personnel resources. It is not farfetched to think law enforcement’s use of cognitive computing will extend to using thinking, real-time robots in the field in the not-so-distant future. IBM’s Watson currently uses its artificial intelligence to suggest medical diagnoses and treatment in the healthcare industry and assists the finance industry in improving investment decisions. IBM and similar companies already offer predictive analytics and cognitive computing programs to law enforcement for real-time intelligence and investigative purposes. This article will explore the consequences of predictive and content analytics and the future of cognitive computing, such as utilizing “robots” such as an imaginary “Officer Joe Roboto” in the law enforcement context. Would our interactions with Officer Joe Roboto trigger the same Fourth Amendment concerns and protections as those when dealing with a flesh-and-blood police officer? Are we more afraid of a “robotic” Watson, its capabilities, and lack of feeling and biases, compared to a human law enforcement officer? Assuming someday in the future we might be able to solve the physical limitations of a robot, would a “robotic” officer be preferable to a human one? What sort of limitations would we place on such technology? This article attempts to explore the ramifications of using such computers/robots in the future. Autonomous robots with artificial intelligence and the widespread use of predictive analytics are the future tools of law enforcement in a digital age, and we must come up with solutions as to how to handle the appropriate use of these tools.

“Rethinking the Fourth Amendment in the Age of Supercomputers, Artificial Intelligence, and Robots” by M. Reid, West Virginia Law Review, Vol. 119, No. 101, 2017.


Equality of Opportunity in Supervised Learning

M. HARDT, E. PRICE, N. SREBRO

The authors propose a criterion for discrimination against a specified sensitive attribute in supervised learning and use a case study of FICO credit scores to illustrate it. Their notion is “oblivious”: it depends only on the joint statistics of the predictor, the target, and the protected attribute, not on the interpretation of individual features of the data. The paper examines the inherent limits of defining and identifying biases based on such oblivious measures, and the authors show that any learned predictor can be optimally adjusted to remove discrimination, according to their definition.

Authors’ Abstract

We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.

“Equality of Opportunity in Supervised Learning” by M. Hardt, E. Price, N. Srebro, Cornell University Library, [v1] 7 Oct 2016.
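As a rough illustration of the paper’s “oblivious” criterion, the sketch below checks whether a binary predictor’s true-positive and false-positive rates match across groups defined by a protected attribute (the equalized-odds condition). The toy data, threshold, and function names are our own assumptions rather than the authors’ code.

```python
# A minimal, illustrative check of an "oblivious" fairness notion:
# equalized odds holds when the predictor's true-positive and false-positive
# rates are equal across groups defined by the protected attribute.
# The toy data below is hypothetical and only meant to show the bookkeeping.
import numpy as np

def group_rates(y_true, y_pred, group):
    """Return {group_value: (true_positive_rate, false_positive_rate)}."""
    rates = {}
    for g in np.unique(group):
        mask = group == g
        y_t, y_p = y_true[mask], y_pred[mask]
        tpr = y_p[y_t == 1].mean() if (y_t == 1).any() else float("nan")
        fpr = y_p[y_t == 0].mean() if (y_t == 0).any() else float("nan")
        rates[g] = (tpr, fpr)
    return rates

# Toy example: outcomes, a protected attribute, and binary predictions.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)
# A predictor that is slightly harsher on group 1 (for illustration only).
y_pred = ((y_true + rng.normal(0, 0.6, size=1000) - 0.15 * group) > 0.5).astype(int)

for g, (tpr, fpr) in group_rates(y_true, y_pred, group).items():
    print(f"group {g}: TPR={tpr:.2f}, FPR={fpr:.2f}")
# Large gaps in TPR or FPR across groups signal a violation of equalized odds;
# Hardt, Price and Srebro show how to post-process a predictor to close such gaps.
```

Note that the check uses only the joint statistics of prediction, outcome, and group membership, which is exactly what makes the criterion “oblivious” to how the underlying features are interpreted.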