Privacy Scholarship Research Reporter: Issue 2, July 2017 – Artificial Intelligence and Machine Learning: The Privacy Challenge
Notes from FPF
Building on our first issue, which discussed the various privacy challenges related to algorithmic accountability, Future of Privacy Forum’s Privacy Scholarship Reporter now turns its focus to thoughtful academic treatments of the privacy challenges, and the ethical data use questions, raised by AI and machine learning.
Artificial Intelligence is perhaps easier to intuitively grasp than to explicitly define – a truth that embodies the very challenge of trying to design machines that reflect what it’s like to be human. With every new technology comes the question: what “new” privacy challenge does this platform, service, or capability pose? Are there new privacy challenges in AI? Or are there just the same questions about consent, transparency, use, and control, but in new contexts and products? If there are new aspects, can the existing policy framework address them sufficiently? In AI, we may find that there are indeed challenges that expand beyond simply greater scope and scale, and that push us to define new tools with which to address them.
What we do know is that we cannot leave AI or Machine Learning in a black box. While retail recommendations for “people who bought this also bought that” seem clear and reasonable, what do we understand about the stock-picking models that underlie our economy? A language translation program may feel straightforward, but what about the selection of news, travel options, or job offers tied to your multi-language capabilities, or your demonstrated interest in – or distaste for – other cultures?
Machines are learning to read our emotions, interpret body language, and predict our comfort-seeking behavior. Are we building bigger and more impenetrable bubbles that will limit or divide us? Or are we creating more extended complex worlds that allow us to know and understand more about the world around us? How can we tell, and how can we understand and control our own data “selves” in the process? These are areas that deserve focused attention, and the scholarship addressing them is only just beginning.
In this issue are articles that provide an excellent basis and introduction to Machine Learning and Artificial Intelligence. They include publications that: propose methods that might combat potential discrimination and bias in predictive modeling based on AI; question whether existing regulatory and legal norms are sufficient for the challenges of AI (including privacy) or whether new frameworks may be desirable; ask how AI will impact individual rights, including Fourth Amendment questions; delve into the tricky questions of ethics regarding the widespread use of AI systems in the general population; and explore what the presence of robots in our homes will do to our understanding of privacy.
Is there important scholarship missing from our list? Send your comments or feedback to [email protected]. We look forward to hearing from you.
Brenda Leong, Senior Counsel and Director of Strategy, FPF
Big Data, Artificial Intelligence, Machine Learning and Data Protection
THE UNITED KINGDOM INFORMATION COMMISSIONER’S OFFICE
This discussion paper looks at the implications of big data, artificial intelligence (AI), and machine learning for data protection, and explains the ICO’s views on these. It defines big data, AI, and machine learning, and identifies the particular characteristics that differentiate them from more traditional forms of data processing. Recognizing the benefits that can flow from big data analytics, the paper analyzes the main implications for data protection and examines some of the tools and approaches that can help organizations ensure that their big data processing complies with data protection requirements. Also discussed are the argument that data protection, as enacted in current legislation, will not work for big data analytics, and the growing role of accountability alongside the more traditional principle of transparency. The main conclusions are that, while data protection can be challenging in a big data context, the benefits will not be achieved at the expense of data privacy rights, and meeting data protection requirements will benefit both organizations and individuals. The paper closes with six key recommendations for organizations using big data analytics.
Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI
S. BIRD, S. BAROCAS, K. CRAWFORD, F. DIAZ, H. WALLACH
This paper points out that while computer science has long performed large-scale experimentation on users, novel autonomous systems for experimentation, driven by advances in artificial intelligence, are raising complex, unanswered questions for the field. Some of these questions are computational, while others relate to the social and ethical implications of these systems. The authors identify several questions about the social and ethical implications of autonomous experimentation systems, concerning the design of such systems, their effects on users, and their resistance to some common mitigations.
Authors’ Abstract
In the field of computer science, large-scale experimentation on users is not new. However, driven by advances in artificial intelligence, novel autonomous systems for experimentation are emerging that raise complex, unanswered questions for the field. Some of these questions are computational, while others relate to the social and ethical implications of these systems. We see these normative questions as urgent because they pertain to critical infrastructure upon which large populations depend, such as transportation and healthcare. Although experimentation on widely used online platforms like Facebook has stoked controversy in recent years, the unique risks posed by autonomous experimentation have not received sufficient attention, even though such techniques are being trialled on a massive scale. In this paper, we identify several questions about the social and ethical implications of autonomous experimentation systems. These questions concern the design of such systems, their effects on users, and their resistance to some common mitigations.
Averting Robot Eyes
M.E. KAMINSKI, M. REUBEN, C. GRIMM, W.D. SMART
The authors argue that home robots will inevitably cause privacy harms while acknowledging that robots can provide beneficial services — as long as consumers trust them. This paper evaluates potential technological solutions that could help home robots keep their promises, avert their “eyes”, and otherwise mitigate privacy harms. The goal of the study is to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms. Five principles for home robots and privacy design are proposed: data minimization, purpose specification, use limitation, honest anthropomorphism, and dynamic feedback and participation. Current research into privacy-sensitive robotics, evaluating what technological solutions are feasible and where the harder problems lie, is also discussed.
Authors’ Abstract
Home robots will cause privacy harms. At the same time, they can provide beneficial services — as long as consumers trust them. This Essay evaluates potential technological solutions that could help home robots keep their promises, avert their eyes, and otherwise mitigate privacy harms. Our goals are to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms.
We posit that home robots will raise privacy problems of three basic types: (1) data privacy problems; (2) boundary management problems; and (3) social/relational problems. Technological design can ward off, if not fully prevent, a number of these harms. We propose five principles for home robots and privacy design: data minimization, purpose specifications, use limitations, honest anthropomorphism, and dynamic feedback and participation. We review current research into privacy-sensitive robotics, evaluating what technological solutions are feasible and where the harder problems lie. We close by contemplating legal frameworks that might encourage the implementation of such design, while also recognizing the potential costs of regulation at these early stages of the technology.
“Averting Robot Eyes” by M.E. Kaminski, M. Reuben, C. Grimm, and W.D. Smart, Maryland Law Review, Vol. 76, p. 983, 2017.
Ethically Aligned Design
THE IEEE GLOBAL INITIATIVE FOR ETHICAL CONSIDERATIONS IN ARTIFICIAL INTELLIGENCE AND AUTONOMOUS SYSTEMS
The IEEE Global Initiative provides the opportunity to bring together multiple voices in the Artificial Intelligence and Autonomous Systems communities to identify and find consensus on timely issues. This document’s purpose is to advance a public discussion of how these intelligent and autonomous technologies can be aligned to moral values and ethical principles that prioritize human well-being. It includes eight sections, each addressing a specific topic related to AI/AS that has been discussed at length by a specific committee of The IEEE Global Initiative. Issues and candidate recommendations pertaining to these topics are listed in each committee section. The eight sections are: General Principles; Embedding Values in Autonomous Intelligent Systems; Methodologies to Guide Ethical Research and Design; Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI); Personal Data and Individual Access Control; Reframing Autonomous Weapons Systems; Economics/Humanitarian Issues; and Law.
“Ethically Aligned Design,” The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, December 2016.
Law and Regulation of Artificial Intelligence and Robots: Conceptual Framework and Normative Implications
N. PETIT
In light of the many challenges that affect attempts to devise law and regulation in a context of technological incipiency, this paper seeks to offer a methodology geared to the specific fields of AIs and robots. It addresses the following normative question: should a social planner adopt specific rules and institutions for AIs and robots, or should the resolution of issues be left to Hume’s three “fundamental laws of nature”, namely ordinary rules on property and liability, contract laws, and the court system? The paper’s four sections review the main regulatory approaches proposed in the existing AI and robotics literature; discuss identifiable regulatory trade-offs, that is, the threats and opportunities created by the introduction of regulation of AIs and robotic applications; examine liability as a case study; and present a possible methodology for the law and regulation of AIs and robots.
Author’s Abstract
Law and regulation of Artificial Intelligence (“AI”) and robots is emerging, fuelled by the introduction of industrial and commercial applications in society. A common thread in many regulatory initiatives is that they occur without a clear or explicit methodological framework. In light of the many challenges that affect attempts to devise law and regulation in a context of technological incipiency, this paper seeks to offer a methodology geared to the specific fields of AIs and robots. At bottom, the paper addresses the following normative question: should a social planner adopt specific rules and institutions for AIs and robots or should the resolution of issues be left to Hume’s three “fundamental laws of nature”, namely ordinary rules on property and liability, contract laws and the court system? To explore that question, the analysis is conducted under a public interest framework.
Section 1 reviews the main regulatory approaches proposed in the existing AI and robotic literature, and stresses their advantages and disadvantages. Section 2 discusses identifiable regulatory trade-offs, that is the threats and opportunities created by the introduction of regulation in relation to AIs and robotic applications. Section 3 focuses on the specific area of liability as a case-study. Finally, Section 4 proposes a possible methodology for the law and regulation of AIs and robots. In conclusion, the paper proposes to index the regulatory response upon the nature of the externality – positive or negative – created by an AI application, and to distinguish between discrete, systemic and existential externalities.
Machine Learning: The Power and Promise of Computers That Learn by Example
THE ROYAL SOCIETY
This report by The Royal Society provides an excellent overview of machine learning, its potential, and its impact on society. Through this initiative, The Royal Society sought to investigate the potential of machine learning over the next 5-10 years and the barriers to realizing that potential. In doing so, the project engaged with key audiences — in policy, industry, academia and the public — to raise awareness of machine learning, understand views held by the public, contribute to the public debate about machine learning, and identify the key social, ethical, scientific, and technical issues it presents. Chapters five and six discuss the societal impact of machine learning, looking more closely at the privacy-focused challenges these technologies create, both ethically and technologically.
Rethinking the Fourth Amendment in the Age of Supercomputers, Artificial Intelligence, and Robots
M. REID
This paper posits that it is not farfetched to think law enforcement’s use of cognitive computing will extend to using thinking, real-time robots in the field in the not-so-distant future. IBM’s Watson currently uses its artificial intelligence to suggest medical diagnoses and treatment in the healthcare industry and assists the finance industry in improving investment decisions. In this article, the author explores the consequences of predictive and content analytics and the future of cognitive computing, including the use of an imaginary robot, “Officer Joe Roboto,” in the law enforcement context. Would our interactions with Officer Joe Roboto trigger the same Fourth Amendment concerns and protections as those when dealing with a flesh-and-blood police officer? Are we more afraid of a “robotic” Watson, its capabilities, and its lack of feeling and biases, compared to a human law enforcement officer? This article attempts to explore the ramifications of using such computers/robots in the future.
Author’s Abstract
Law enforcement currently uses cognitive computers to conduct predictive and content analytics and manage information contained in large police data files. These big data analytics and insight capabilities are more effective than using traditional investigative tools and save law enforcement time and a significant amount of financial and personnel resources. It is not farfetched to think law enforcement’s use of cognitive computing will extend to using thinking, real-time robots in the field in the not-so-distant future. IBM’s Watson currently uses its artificial intelligence to suggest medical diagnoses and treatment in the healthcare industry and assists the finance industry in improving investment decisions. IBM and similar companies already offer predictive analytics and cognitive computing programs to law enforcement for real-time intelligence and investigative purposes. This article will explore the consequences of predictive and content analytics and the future of cognitive computing, such as utilizing “robots” such as an imaginary “Officer Joe Roboto” in the law enforcement context. Would our interactions with Officer Joe Roboto trigger the same Fourth Amendment concerns and protections as those when dealing with a flesh-and-blood police officer? Are we more afraid of a “robotic” Watson, its capabilities, and lack of feeling and biases, compared to a human law enforcement officer? Assuming someday in the future we might be able to solve the physical limitations of a robot, would a “robotic” officer be preferable to a human one? What sort of limitations would we place on such technology? This article attempts to explore the ramifications of using such computers/robots in the future. Autonomous robots with artificial intelligence and the widespread use of predictive analytics are the future tools of law enforcement in a digital age, and we must come up with solutions as to how to handle the appropriate use of these tools.
Equality of Opportunity in Supervised Learning
M. HARDT, E. PRICE, N. SREBRO
The authors use a case study of FICO credit scores to illustrate their “oblivious” notion of discrimination, under which a measure depends only on the joint statistics of the predictor, the target, and the protected attribute, and not on the interpretation of individual features of the data. The study examines the inherent limits of defining and identifying biases based on this notion and proposes a criterion for discrimination against a specified sensitive attribute. The authors argue that any learned predictor can be optimally adjusted to remove discrimination according to their definition.
Authors’ Abstract
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
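To give readers a concrete sense of the “oblivious” approach, here is a minimal sketch (ours, not the authors’ reference implementation) of the kind of post-hoc test the abstract describes: it compares true-positive and false-positive rates across protected groups using only the joint statistics of the prediction, the target, and group membership. All names are illustrative, and the code assumes binary predictions and targets encoded as 0/1 NumPy arrays.

    import numpy as np

    def equalized_odds_gaps(y_true, y_pred, group):
        """Oblivious check: per-group error rates computed only from
        (target, prediction, group), never from individual features."""
        y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
        rates = {}
        for g in np.unique(group):
            in_g = group == g
            # P(pred = 1 | target = 1, group = g) and P(pred = 1 | target = 0, group = g)
            tpr = y_pred[in_g & (y_true == 1)].mean()
            fpr = y_pred[in_g & (y_true == 0)].mean()
            rates[g] = (tpr, fpr)
        tprs, fprs = zip(*rates.values())
        # A predictor satisfying the paper's criterion drives both gaps toward zero.
        return max(tprs) - min(tprs), max(fprs) - min(fprs)

A derived predictor adjusted to close these gaps, as the paper proposes, shifts the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving classification accuracy.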
Infographic: Data and the Connected Car – Version 1.0
On June 27, 2017, the Future of Privacy Forum released an infographic, “Data and the Connected Car – Version 1.0,” describing the basic data-generating devices and flows in today’s connected vehicles. The infographic will help consumers and businesses alike understand the emerging data ecosystems that power incredible new features—features that can warn drivers of an accident before they see it, or jolt them awake if they fall asleep at the wheel.
Many of these new features are enabled by the collection of new types of data, putting the topic of privacy in connected cars on the agenda of industry, policymakers, and regulators. On June 28, 2017, the Federal Trade Commission and the National Highway Traffic Safety Administration hosted a workshop on the privacy and security issues around automated and connected vehicles. This was the first workshop co-hosted by the two agencies, and their partnership is a recognition of the convergence of the automotive and technology sectors. Lauren Smith, FPF’s Connected Car Policy Counsel, spoke on a panel about cybersecurity and data.
“The benefits of connected vehicle technologies are crucial to addressing the 94% of car accidents that are caused by human error,” said Smith. “But we need to foster transparency and communication around consumer data use in order to deploy them responsibly. Conversations between lawmakers, consumers, and businesses such as those happening tomorrow need to go beyond the current day and focus on building trustworthy data practices—and communicating them—as vehicles advance. We think that explaining cars’ data-transmitting devices and flows is an important first step.”
This infographic accompanies a project FPF launched earlier this year, a first-of-its-kind consumer guide to Personal Data in Your Car. The Guide includes tips to help consumers understand the new technologies powered by data inside the car.
Future of Privacy Forum Releases Infographic Mapping Data and the Connected Car
in Advance of FTC & NHTSA Workshop
Washington, DC – Today, the Future of Privacy Forum released an infographic, “Data and the Connected Car – Version 1.0,” describing the basic data-generating devices and flows in today’s connected vehicles. The infographic will help consumers and businesses alike understand the emerging data ecosystems that power incredible new features—features that can warn drivers of an accident before they see it, or jolt them awake if they fall asleep at the wheel.
Many of these new features are enabled by the collection of new types of data, putting the topic of privacy in connected cars on the agenda of industry, policymakers, and regulators. Tomorrow, Wednesday, June 28, the Federal Trade Commission and the National Highway Traffic Safety Administration will host a workshop on the privacy and security issues around automated and connected vehicles. This is the first workshop co-hosted by the two agencies, and their partnership is a recognition of the convergence of the automotive and technology sectors.
“The benefits of connected vehicle technologies are crucial to addressing the 94% of car accidents that are caused by human error,” said Lauren Smith, FPF’s Connected Cars Policy Counsel. “But we need to foster transparency and communication around consumer data use in order to deploy them responsibly. Conversations between lawmakers, consumers, and businesses such as those happening tomorrow need to go beyond the current day and focus on building trustworthy data practices—and communicating them—as vehicles advance. We think that explaining cars’ data-transmitting devices and flows is an important first step.”
This infographic accompanies a project FPF launched earlier this year, a first-of-its-kind consumer guide to Personal Data in Your Car. The Guide includes tips to help consumers understand the new technologies powered by data inside the car. It describes common types of collected data, the Privacy Principles that nearly all automakers have committed to, and includes a “privacy checklist” for renting or selling a car. Did you delete your synced contacts list? How about your garage door programming? And don’t forget to wipe your home address from that navigation system! These easy, simple steps can help consumers protect their own data and start thinking about the types of information involved in today’s new mobile ecosystem.
The Future of Privacy Forum (FPF) is a non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.
Federal Trade Commission: COPPA Applies to Connected Toys
This week, the Federal Trade Commission (FTC) updated its guidance on COPPA, the Children’s Online Privacy Protection Act, to clarify that the 1998 statute applies not just to websites and online service providers that collect data from children, but also to Internet of Things devices, including children’s toys. The updated guidance has been applauded by advocates, and is a welcome clarification that COPPA’s strong protections apply to toys like Hello Barbie, Dino, and Fisher-Price’s Smart Toy. The guidance acknowledges the potential harm to children of deceptive data practices, writing “when companies surreptitiously collect and share children’s information, the risk of harm is very real, and not merely speculative.”
In December 2016, Future of Privacy Forum and Family Online Safety Institute published “Kids & The Connected Home: Privacy in the Age of Connected Dolls, Talking Dinosaurs, and Battling Robots,” an early analysis of the privacy and security implications of connected children’s toys. At the time, some advocates were calling for a legal update to cover the unique issues of screen-less dolls and teddy bears that might collect information from children. In our white paper, we were one of the first to analyze COPPA in the context of children’s toys and concluded that it almost certainly already applied to the wide range of Internet-connected toys on the market:
“Although COPPA was written long before a mainstream market for connected toys existed, there is a growing consensus that the federal statute applies to the wide range of modern toys that connect to the Internet. Most connected toys available today connect to the Internet through a mobile app or other mechanism … and it is well-established that COPPA applies to Internet-connected devices and platforms, including smartphones, tablets, and apps. The FTC is vested with the legal authority to interpret COPPA, and it has promulgated more detailed requirements in the COPPA Rule. COPPA applies to any provider (“operator”) of “a Website or online service directed to children, or any operator that has actual knowledge that it is collecting or maintaining personal information from a child . . .”. Although the FTC has not yet taken an enforcement action against a connected toy operator, the Commission has stated that the term “online service” broadly covers any service available over the Internet or that connects to the Internet or a wide-area network.”
Although COPPA’s protections are strong, we recommend that providers of connected toys go even further in protecting sensitive information collected from children, and we discuss suggested best practices in our white paper. Privacy-conscious steps include:
A full privacy policy on a toy’s box is not likely to be helpful, but some sort of cue will help parents decide before purchasing whether they are comfortable with the toy or whether they would like to do more research. For example, a toy’s packaging could say: “Parents, this toy will require that you create a personal online account in order to access all features” or “Parents, this toy will require your permission to use your child’s information to bring the toy to life!”.
Companies should invest in developing creative and intuitive ways to alert children and parents when data is being collected or transmitted—including glyphs, and other visual, audio, and haptic cues.
It is important to establish strong data security practices, including: implementing strong encryption standards (HTTPS / TLS) so that the toy will not send personal information over insecure channels, or store personal information in an insecure format on the toy itself; ensuring that technical safeguards prevent the toy from communicating with unauthorized devices or servers; and avoiding the creation of passwords that cannot be changed by users or the use of the same default password for all toys.
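As an illustration of the first of these security safeguards, the short sketch below shows one way a toy’s companion software might refuse to transmit personal information except over an encrypted, certificate-verified TLS connection. It is a minimal example under stated assumptions, not vendor code: the endpoint URL and function names are hypothetical.

    import ssl
    import urllib.request

    # Hypothetical telemetry endpoint, for illustration only.
    API_URL = "https://api.example-toy.com/v1/telemetry"

    def send_telemetry(payload: bytes) -> int:
        """Send data only over an encrypted, certificate-verified channel."""
        # Refuse plaintext transport outright.
        if not API_URL.startswith("https://"):
            raise ValueError("refusing to send personal data over an insecure channel")
        # The default context verifies the server certificate and hostname;
        # pinning a TLS 1.2 floor rejects weak protocol versions.
        context = ssl.create_default_context()
        context.minimum_version = ssl.TLSVersion.TLSv1_2
        request = urllib.request.Request(API_URL, data=payload, method="POST")
        with urllib.request.urlopen(request, context=context) as response:
            return response.status

The same idea applies on the device itself: rejecting any connection that fails certificate verification is what prevents a toy from communicating with unauthorized servers.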
Children’s personal information, and parents’ ability to make informed, meaningful choices, should be given the highest level of legal protection. The FTC’s updated guidance represents an important step towards this goal, as well as towards protecting privacy in the growing Internet of Things.
The Top 10: Student Privacy News (May – June 2017)
The Future of Privacy Forum tracks student privacy news very closely, and shares relevant news stories with our newsletter subscribers. Approximately every month, we post “The Top 10,” a blog with our top student privacy stories.
The Top 10
FPF has relaunched FERPA|Sherpa! The site now includes:
A chart of all state student privacy laws passed since 2013;
A searchable resource bank with over 400 education privacy resources, allowing reference searches by who the resource is aimed at, the type of resource, and tags like “FERPA” or “parental rights”; and
Blogs from a variety of contributors, including educators, teachers, parents, companies, and other stakeholders.
As reported in the previous newsletter, both the House and Senate have introduced the College Transparency Act, which would overturn the current federal ban on having a student-level data system at the U.S. Department of Education. There was a hearing on the CTA in the House, and many people continue to weigh in on whether the CTA is a good or bad idea.
There have been a couple of big breaches in higher ed in the past month, including a major breach at the University of Oklahoma where a student journalist discovered that she was able to access sensitive information like student financial aid records and grades through the school’s use of Microsoft Delve. Following the discovery of the breach, the U.S. Department of Education contacted the school to “further assess the institution’s compliance with its data security safeguard requirements according to the Gramm-Leach-Bliley Act.” Don’t know much about that law? Check out the FSA’s letter (and second letter) to institutions.
“Schools are watching students’ social media, raising questions about free speech” and privacy, via a feature on PBS NewsHour on June 20. This relates to privacy and surveillance questions I discussed in my report on that topic.
During the 2017 Annual Advisory Board Meeting, FPF issued its first-ever award to Jessica Rich, the former Director of the Bureau of Consumer Protection at the Federal Trade Commission (FTC), for her leadership in responsible data use and consumer privacy.
“We are thrilled to honor Jessica with our inaugural award celebrating leadership in responsible data use and consumer privacy,” said Jules Polonetsky, CEO, Future of Privacy Forum. “She is widely credited with building the FTC’s privacy program from a small team in the 1990s to the signature program that it is today.”
In her role as Director, Jessica managed over 450 attorneys, investigators, and support staff charged with stopping consumer fraud and false advertising, and protecting consumers’ privacy. During her tenure, the Bureau brought a series of major law enforcement actions to halt ongoing law violations. The Bureau also issued groundbreaking reports on data brokers, the Internet of Things, Cross Device Tracking, Big Data, mobile security, and kids’ apps.
Prior to being named Director, Jessica served in a number of senior roles at the FTC, including Deputy Director of the Bureau, Associate Director of the Division of Financial Practices, and Acting Associate Director of the Division of Privacy and Identity Protection.
On May 8, 2017, Jessica joined Consumer Reports as its new Vice President of Consumer Policy and Mobilization.
FPF salutes Jessica for her leadership in responsible data use and consumer privacy!
Future of Privacy Forum and the Data Quality Campaign Relaunch the FERPA|Sherpa Education Privacy Resource Center
FOR IMMEDIATE RELEASE
June 6, 2017
Contact: Melanie Bates, Director of Communications, [email protected]
Future of Privacy Forum and the Data Quality Campaign Relaunch the
FERPA|Sherpa Education Privacy Resource Center
Washington, DC – Today, the Future of Privacy Forum (FPF) and the Data Quality Campaign (DQC) relaunched FERPA|Sherpa, the leading resource for information about education privacy issues. Named after the core federal law that governs education privacy, FERPA|Sherpa provides students, parents, schools, ed tech companies, and policymakers with easy access to the resources, best practices, and guidelines that are essential to understanding the complex privacy issues arising at the intersection of kids, schools, and technology.
The relaunched site includes:
A chart of all state student privacy laws passed since 2013;
A searchable resource bank with over 400 education privacy resources, allowing reference searches by who the resource is aimed at, the type of resource, and tags like “FERPA” or “parental rights”; and
Blogs from a variety of contributors, including educators, teachers, parents, companies, and other stakeholders.
“FERPA|Sherpa is the best place to easily find the most current, relevant, and authoritative resources regarding student privacy,” said Amelia Vance, Education Policy Counsel for FPF, who runs and creates content for the website. “Stakeholders have created so many great resources and models – some quite recently – and FERPA|Sherpa is the trusted one-stop shop for anyone who wants to access the latest best practices and guidance.”
“Data should be used to open doors, never to close them,” said Aimee Rogstad Guidera, President and CEO of DQC. “Parents want and deserve assurances that their child’s information is used to help them, never to hurt them and that this data is safeguarded and used responsibly and ethically. That’s why we are pleased to partner with the Future of Privacy Forum on the FERPA|Sherpa website to ensure the education sector is prioritizing the effective and responsible use of data in the service of student learning. FERPA|Sherpa provides information, messages, tools, and emerging best practices around safeguarding data to parents, educators, and policymakers so they can be informed actors and advocates for the ethical use of data in education.”
More than at any other time in the evolution of education, data-driven innovations and emerging technologies – such as online textbooks, apps, tablets and mobile devices, and internet-based learning – are bringing advances and critical improvements in teaching and learning, with profound implications.
At the same time, the increased use of vendors and data in schools is matched by the need for heightened responsibility to manage and safeguard student data and implement policies that benefit education and minimize risk. Concerns have been raised about how student data is collected and used in a next-stage learning ecosystem buzzing with social media, mobile devices, central databases, student records, Big Data, and an array of vendors and software. Since 2013, over 100 new student privacy laws have passed in 40 states.
“The Future of Privacy Forum is committed to creating a better landscape for education privacy,” said Jules Polonetsky, CEO of FPF. “The relaunch of FERPA|Sherpa will enable more effective collaboration between stakeholders and better education privacy practices from schools and companies.”
“Technology and the internet are powerful tools for teaching, learning and family-school communication. At the same time, it is imperative that students’ academic and personal information is protected,” said Laura Bay, president of National PTA. “It is a top priority of National PTA to safeguard children’s data and make certain that parents have appropriate notification and consent as to what and how data is collected and used. National PTA is pleased to collaborate with the Future of Privacy Forum and the Data Quality Campaign to bring the FERPA|Sherpa online resource center to families nationwide to ensure they are knowledgeable about the laws that protect student data as well as students’ and parents’ rights under the laws.”
The new FERPA|Sherpa website builds on FPF’s work to ensure the responsible use of student data and education technology in K-12 and higher education, helping educators with resources and information, and seeking input from all stakeholders to ensure privacy while allowing for effective data and technology use in education. FERPA|Sherpa initially launched in spring 2014.
FPF and the Data Quality Campaign are proud to support responsible education technologies in order to promote successful student outcomes. If you have questions or resources that you think should be part of FERPA|Sherpa, please contact Amelia Vance at [email protected].
June 22nd Event: Ensuring Individual Privacy in a Data-Driven World
Criteo and the Future of Privacy Forum are pleased to invite you to a conference gathering a high-level selection of regulators, lawyers, advertisers, and publishers to discuss individual privacy in a data-driven world.
Save the date for Thursday, June 22, 2017 from 8:30 am to 8:00 pm.
Future of Privacy Forum, Washington & Lee University School of Law, and the International Association of Privacy Professionals recently collaborated on a Call for Papers about the privacy impact of current and projected technological advancements, focusing on the transparency, sharing, and algorithmic implications of data collection and use – topics identified in the National Privacy Research Strategy.
The accepted papers from this call introduce new thinking by taking a closer look at how data flow maps can be leveraged to increase data processing transparency and privacy compliance in the enterprise, how the market has either created or failed to create privacy-enhancing standards, and how the traditional notice and consent model might be rethought in the context of real-time M2M communication. These papers have been published in the latest issue of the Washington & Lee Law Review, Online Roundtables.
The authors of these papers will share their work at an event series co-hosted by the FPF Capital-Area Academic Network and IAPP’s Washington D.C. KnowledgeNet Chapter. The first event in the series will be held on Tuesday, June 6th, featuring Chetan Gupta, CIPP/US, of UC Berkeley School of Law, who will discuss conclusions from his research on how individuals can affect the market’s adoption of privacy standards and security technologies for the tools we use every day. Following Mr. Gupta, we will be joined by Carrie A. Goldberg, Esq., for a question and answer session about her work focused on justice for individuals who are under attack. Carrie is the founder of C.A. Goldberg, PLLC, a law firm operating out of Brooklyn, New York, that focuses on internet privacy and abuse, domestic violence, and sexual consent.
There are many lessons to learn from the spread of the WannaCry ransomware attacks across the globe. One lesson that needs more attention is the danger that exists when a government attempts to create mandatory backdoors into computer software and systems.
The ransomware attacks began May 12 and soon spread to over 150 countries and over 10,000 organizations, encrypting files and demanding payment in the online currency Bitcoin for the hackers to unlock those files. The attacks contained relatively unsophisticated ransomware. By contrast, the software that spread the ransomware from system to system was very sophisticated, based on the EternalBlue exploit that was stolen from the National Security Agency and leaked in April by a group called the Shadow Brokers.
An initial lesson is a reminder that leaks from intelligence agencies can and do happen: from Edward Snowden, through the publication of CIA hacking code, to the Shadow Brokers’ release of NSA hacking tools. In an era where leaks happen at scale and are disseminated globally, agencies face a “declining half-life of secrets” and must anticipate that their actions and techniques will be made public far sooner than was historically true.
An important lesson picked up by tech policy experts has been the need to improve what is called the “vulnerabilities equities process” (VEP). The NSA has long had this process to weigh the benefits of a spying tool (such as breaking into an adversary’s computer system) with the costs (such as leaving civilian computers open to the same attack). In 2013, I was part of President Obama’s NSA Review Group, and that administration accepted our recommendation to shift the VEP to the White House and involve more agencies and perspectives, especially to highlight the risk to the economy and our own infrastructure from vulnerabilities that are not patched.
Experience with WannaCry shows, however, that improving the VEP is not enough to create good security. After the government learned about the Shadow Brokers theft, it alerted Microsoft to the vulnerability exploited by the ransomware. Microsoft released a patch in March, before the Shadow Brokers published the key attack mechanism. Nonetheless, Britain’s National Health Service and the other victims world-wide did not update their systems in time. These failures show the need to update quickly and systematically, an issue whose importance will only increase as myriad devices connect online as part of the Internet of Things, where many devices have no mechanism for updates.
Along with these lessons, however, WannaCry should inform us about the egregious risks that come from mandatory vulnerabilities in software, what are often called “backdoors.” The greatest public attention to backdoors arose when the FBI sought to require Apple to write software that would gain access to an encrypted iPhone in the San Bernardino terrorism case. Apple CEO Tim Cook refused, saying “There have been people that suggest that we should have a back door. But the reality is if you put a back door in, that back door’s for everybody, for good guys and bad guys.” Strong encryption is permitted even under the 1994 U.S. law that requires phone companies to build their networks to respond to court orders. As the ACLU’s Chris Soghoian has emphasized, that law “explicitly protected the rights of companies that wanted to build encryption into their products – encryption with no backdoors, encryption with no keys that are held by the company.”
The risk of government-mandated backdoors goes far beyond the U.S., however. Late last year, the United Kingdom passed the Investigatory Powers Act, which allows the government to compel communications providers to remove “electronic protection applied … to any communications or data.” The Electronic Frontier Foundation reports that it does not believe the U.K. government has yet used this power to break encryption, but the law is now on the books and companies could face severe consequences for non-compliance.
Even more broadly, China’s new cybersecurity law can be read to require encryption backdoors, Brazil temporarily blocked the encrypted app WhatsApp when seeking access to user data, the European Union Justice Minister is considering measures to force companies to cooperate with law-enforcement requests, and India has proposed sweeping encryption legislation that would require backdoor access as well.
The difficulty with these mandated backdoors, however, is that a computer vulnerability that exists in China, Brazil, or India typically will exist in the United States as well. In all of these countries, users rely on largely the same hardware and software – the same phones, laptops, operating systems, and applications.
The WannaCry attack thus teaches us lessons about the likelihood of leaks, the need for a better vulnerabilities process, and the importance of better software updating.
Most importantly, however, it teaches us that a backdoor required in one nation opens up the data and devices of users everywhere in the world. Over 150 countries suffered the effects of the WannaCry ransomware. Over 150 countries will also have their systems exposed if any one country succeeds in mandating a backdoor in the devices and software upon which we all rely.
Peter Swire teaches cybersecurity at the Georgia Tech Scheller College of Business, and is a Senior Fellow at the Future of Privacy Forum.