Key FPF-curated background resources – policy and regulatory documents, academic papers, and technical analyses regarding brain-computer interfaces – are available here.
Recently, Elon Musk livestreamed an update for Neuralink—his startup focused on creating brain-computer interfaces (BCIs). BCI is an umbrella term for devices that detect, amplify, and translate brain activity into usable neuroinformation. At the event, Musk unveiled his newest BCI, implanted in the brain of a pig.
Musk predicts that future BCIs will not only “read” individuals’ brainwaves, but also “write” information into the user’s brain to accomplish such goals as identifying medical issues and allowing users with limited movement to type using their thoughts. Musk even mentioned the long-term prospect of downloading memories. Explaining Neuralink’s newest device, Musk said, “In a lot of ways, it’s kind of like a Fitbit in your skull, with tiny wires.”
It can be hard to separate the facts from the hype surrounding cutting-edge product announcements. But it is clear that brain-computer interfaces are increasingly used by hospitals, schools, individuals, and others for a range of purposes. It is equally clear that BCIs often use sensitive personal data and create data protection risks.
Below, we explain how BCI technologies work, how BCIs are used today, some of the technical challenges associated with implementing the technologies, and the data protection risks they can create. An upcoming FPF paper examines these issues at greater length.
We conclude with five important recommendations for developers intent on maximizing the utility and minimizing the risks of BCIs:
(1) Employ Privacy Enhancing Technologies to Safeguard Data;
(2) Ensure On/Off User Controls;
(3) Enshrine Purpose Limitation;
(4) Focus on Data Quality; and
(5) Promote Security.
What are Brain-Computer Interfaces? //
Some BCIs are “invasive” or “semi-invasive”—implanted into a user’s brain or installed on the brain surface—like Neuralink. But many others are non-invasive, commonly utilizing external electrodes, which do not require surgery. The three main types of BCIs are:
1) Invasive BCIs, which are installed directly into the wearer’s brain, are typically used in the medical context. For example, clinical implants have been used to improve patients’ motor skills. Invasive implants can include devices like an electrode array called a Utah array, as well as newer inventions like neural dust and neural lace, which drape over or are inserted into multiple areas within the brain.
2) Semi-invasive BCIs are often installed on top of the brain, rather than into the brain itself. Such BCIs rely on electrocorticography (ECoG), in which electrodes are placed on the exposed surface of the brain to measure the electrical activity of the cerebral cortex. ECoG is most widely used for managing epilepsy. The Neuralink device is promoted as a coin-sized semi-invasive implant that attaches to the surface of a user’s brain and sends signals to an external device.
3) Non-invasive BCIs typically rely on neuroinformation gathered from electroencephalography (EEG), a common method in which electrodes placed on the scalp record the brain’s electrical activity. Other non-invasive techniques use brain stimulation. For example, transcranial direct current stimulation (tDCS) sends low-level currents to the frontal lobes. While non-invasive BCIs might be an attractive option for headset integration, “non-invasive” is not synonymous with harmless. BCIs are a relatively new technology with the potential for health, privacy, and security risks.
Individuals may be uneasy about devices that can read a user’s thoughts, or alter the composition of these thoughts. However, it is important to note that today’s BCIs do not read or modify thoughts—instead, they rely on machine learning algorithms that have been trained to recognize brain activity in the form of electrical impulses and make inferences about emotional states, actions, and expressions.
Regardless of the technique used, collecting and processing brain signals to derive useful neuroinformation can be a challenging process. Most data derived via BCIs is noisy (especially in the case of non-invasive applications), and creating computer systems that can identify and remove noise is a complex and cumbersome undertaking. After actionable signals are gathered, various artificial intelligence and machine learning models are applied to extract and classify useful neuroinformation. The final task is to accurately translate and match neuroinformation to the desired outcome or action—a process researchers are still attempting to master.
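To make this pipeline more concrete, the following is a minimal, illustrative Python sketch of the kind of processing a non-invasive BCI might perform: band-pass filtering raw EEG-like samples to suppress noise, extracting simple band-power features, and feeding those features to a trained classifier. The sampling rate, frequency bands, classifier, and simulated data are all illustrative assumptions rather than a description of any particular product.

```python
# Illustrative only: a simplified EEG-style pipeline (filter, then features, then classify).
# Real BCIs use far more sophisticated artifact removal, feature extraction, and models.
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.linear_model import LogisticRegression

FS = 256  # assumed sampling rate in Hz (hypothetical)

def bandpass(signal, low=1.0, high=40.0, fs=FS, order=4):
    """Suppress drift and high-frequency noise outside the 1-40 Hz band."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def band_power_features(signal, fs=FS):
    """Average spectral power in canonical EEG bands (theta, alpha, beta)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    bands = [(4, 8), (8, 13), (13, 30)]
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands])

# Hypothetical training data: 1-second epochs with binary labels
# (e.g., "attentive" vs. "not attentive"), here simulated with random noise.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((100, FS))
labels = rng.integers(0, 2, size=100)

X = np.array([band_power_features(bandpass(e)) for e in epochs])
clf = LogisticRegression().fit(X, labels)

new_epoch = rng.standard_normal(FS)
print("Inferred state:", clf.predict([band_power_features(bandpass(new_epoch))])[0])
```

Even in this toy example, the quality of the inference depends entirely on the filtering and the training data, which is why noisy signals remain a central technical challenge for BCIs.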
The Benefits of BCI Technologies & Top of Mind Privacy and Security Risks //
BCIs are, and will continue to be, deployed in doctors’ offices, schools, workplaces, and within our homes and communities. As a novel interface, BCIs hold incredible promise across numerous sectors, from health to gaming. However, each sectoral use comes with a host of unique privacy and security risks.
In the health and wellness sector, BCIs are used to monitor fatigue, restore vision and hearing, and control accessibility devices like wheelchairs, as well as help power prosthetic limbs. The ability of BCIs not only to read brain signals but also to stimulate activity in the brain can create benefits for patients. In the diagnostic and treatment arena, BCIs lessen and sometimes eliminate the need for subjective patient responses. In one NIH-funded study, BCIs were used to detect glaucoma progression over time using objective measurements made by the BCI; this represents a potentially substantial diagnostic improvement over traditional glaucoma assessment, which relies on subjective patient-generated data. In the accessibility space, BCIs have spun off a new generation of neuroprosthetics, or artificial limbs that move in response to patients’ thoughts, and have aided in the creation of BCI-powered wheelchairs.
While health-related BCIs are promising treatments for some patients, they can be vulnerable to security breaches. Recently, researchers showed that hackers could, by introducing imperceptible noise into an EEG signal, force BCIs to spell out certain words. According to the researchers, the consequences of this security vulnerability could range from user frustration to severe misdiagnosis.
In the gaming context, non-invasive wearables outfitted with EEG electrodes provide players with the ability to play games and control in-game objects using their thoughts. Games such as The Adventures of Neuroboy allow players to move objects in the game using their thoughts, which are measured through an EEG-fitted cap. Companies like Neurable are looking to push the limits of user interaction even further by developing AR/VR headsets outfitted with EEG electrodes that collect brainwave data—this data acts as the primary driver of gameplay. In Neurable’s first demo, Awakening, the player assumes the role of a psychokinetically-gifted child who must escape from a government prison. By reading the player’s electrical brain impulses, the BCI lets the player choose among a host of objects to escape from prison and advance through the game.
BCIs can make games more immersive for players and give game developers novel tools. But advances in immersive gaming depend on the collection of neuroinformation, which can lead to heightened privacy risks. Existing immersive games in the AR/VR space often rely on collecting and processing potentially sensitive personal information such as geolocation data and biometric data like the player’s gait and eye movement, as well as audio and video recordings of the player. Future-looking gaming hardware, such as the headset being developed by Neurable, could pair neuroinformation with vast sets of sensitive personal information, which increases the chances of user identifiability while potentially revealing sensitive biological information about the player. This data could be used for a range of commercial purposes, from product improvement and personalization to behavioral advertising and profiling.
BCIs can also be found in schools, where companies claim they can measure student attentiveness. For example, BrainCo, Inc. is developing BCI technology that involves students wearing EEG-fitted headbands in class. The students’ neuroinformation is gathered and displayed on a teacher’s dashboard which allegedly provides insight into student attention levels. In the future, BCIs might be deployed in the education arena to aid students with learning disabilities; some schools already employ AR and VR technologies for this purpose. While personalized education metrics can be helpful to students, parents, and teachers, inaccurate BCI data in the education context could lead to false conclusions about student aptitude, and accurate information could put students at risk of disproportionate penalties for inattentiveness or other behavior.
Measuring attentiveness through the use of BCIs is not unique to the education space. Currently, most proposed workplace uses of BCIs aim to measure engagement and improve employee performance during high-risk tasks. BCIs deployed in the employment context can raise the risks of employer surveillance and discrimination (e.g., data about workers’ emotions could lead to penalties, firing decisions, and other actions).
One of the most future-looking applications of BCIs is in the smart cities and communities space. As early as 2014, researchers proposed a prototype for a Bluetooth-enabled BCI that could help disabled individuals control and direct a smart car over short distances, increasing individuals’ independence. These prototypes, while still very much a work in progress, open the possibility for more complex uses in the future. The potential benefits are substantial, but these technologies also create risks, including the collection and use of drivers’ neuroinformation in combination with other sensitive data, such as location, when controlling the vehicle. Additionally, mind-controlled cars pose a potential public safety risk: they would be driven using neuroinformation that is often imprecise, opaque, and sometimes inaccurate.
In addition to sector-specific privacy risks, BCIs are generally susceptible to the same drawbacks and potential harms associated with other algorithmic processes. For example, harmful bias, a lack of transparency and accountability, as well as a reliance on faulty training data, can lead to individual and collective losses in opportunity. Implementing accurate autonomous systems presents its own set of challenges, such as whether a particular system is appropriate to achieve a desired outcome; whether the system is designed (and re-designed) to reduce bias; and whether the system raises ethical or legal risks.
As a novel interface, BCIs raise important data protection questions that should be addressed throughout their development cycle. Below, we put forward just a handful of the high-level recommendations that developers should adhere to when seeking to create inclusive and privacy-centric brain-computer interfaces.
Key Recommendations for BCI Development //
Because the collection and use of neuroinformation involves a number of privacy and ethical concerns that go beyond current laws and regulations, stakeholders working in this emerging field should follow these principles for mitigating privacy risks:
(1) Employ Privacy Enhancing Technologies to Safeguard Data–BCI providers should integrate recent advances in privacy enhancing technologies (PETs), such as differential privacy, in accordance with principles of data minimization and privacy by design (an illustrative sketch of one such technique follows these recommendations).
(2) Ensure On/Off User Controls–Wherever appropriate, BCI users should have the option to control when their devices are on or off. Some devices may need to always be on in order to fulfill their functions—for example, a BCI that treats a neurological condition. However, when being always on is not an essential feature of the device, users should have a clear and definite way to turn off their device. As with other devices, there are considerable privacy risks when a BCI is always gathering data or can be turned on unintentionally.
(3) Enshrine Purpose Limitation — BCI providers should state the purpose for collecting neuroinformation and refrain from using that information for any other purpose absent user consent. For example, if an educational BCI gauges student attentiveness for the purpose of helping a teacher engage the class, it should not use attentiveness data for another purpose—like ranking student performance—without express and informed consent. BCI providers should also consider limiting unnecessary cross-device collection.
(4) Focus on Data Quality — Providers should strive to use the most accurate data collection processes and machine-learning tools available to ensure accuracy and precision. Algorithmic explainability and reproducibility of results are critical components of accuracy. It is important for BCIs to be both accurate (turning neural signals into correct neuroinformation) and precise (consistently reading the same signals to mean the same thing).
(5) Promote Security — BCI providers should take appropriate measures to secure neuroinformation. BCI devices should be secure against hacking and malware, and company servers should be secure against unauthorized access and tampering. Furthermore, data transfers should be accomplished by secure means, subject to strong encryption.
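As noted in recommendation (1), differential privacy is one privacy enhancing technology that BCI providers could draw on when releasing aggregate statistics derived from neuroinformation. The sketch below shows the classic Laplace mechanism applied to a hypothetical count (for example, the number of sessions in which a headset flagged fatigue); the dataset, query, and epsilon value are illustrative assumptions, not a prescription for any specific product.

```python
# Illustrative only: adding Laplace noise to an aggregate statistic so that the
# released number reveals little about any single user's neuroinformation.
import numpy as np

def laplace_count(values, epsilon=0.5):
    """Release a differentially private count.

    For a counting query, adding or removing one user changes the true count
    by at most 1 (sensitivity = 1), so noise drawn from Laplace(1/epsilon)
    yields epsilon-differential privacy for the released value.
    """
    true_count = int(np.sum(values))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical per-user flags: 1 if the device flagged fatigue in a session.
fatigue_flags = [1, 0, 1, 1, 0, 0, 1, 0]
print("Noisy fatigue count:", round(laplace_count(fatigue_flags), 1))
```

Smaller epsilon values add more noise and provide stronger privacy guarantees, at the cost of less accurate released statistics.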
Moving Forward //
The Future of Privacy Forum is working with stakeholders to better analyze these issues and make recommendations regarding appropriate data protections for brain-computer interfaces. Curated news, resources, and academic papers on BCIs and related topics are available here. We welcome your thoughts and feedback at [email protected] and [email protected].
Additional Resources //
In addition to the resources we curated here, we found the following academic papers, white papers, ethical frameworks, and policy frameworks helpful to understanding the current BCI landscape.
Four Ethical Priorities for Neurotechnologies and AI—by Rafael Yuste et al.—noting that BCI technology can exacerbate social inequalities and offer corporations, hackers, governments, and others new ways to exploit and manipulate people, and that it could profoundly alter some core human characteristics: private mental life, individual agency, and an understanding of individuals as entities bound by their bodies.
Towards New Human Rights in the Age of Neuroscience and Neurotechnology—by Marcello Ienca and Roberto Andorno—assessing the implications of emerging neurotechnology applications in the context of human rights frameworks and suggesting that existing human rights may not be sufficient to respond to these emerging issues.
White Papers //
iHuman: Blurring Lines between Mind and Machine—by The Royal Society—arguing that neural interface technologies will continue to raise profound ethical, political, social, and commercial questions that should be addressed as soon as possible to create mechanisms to approve, regulate, or control the technologies as they develop, as well as to manage the impact they may have on society.
OECD Recommendation on Responsible Innovation in Neurotechnology—adopted by the OECD Council in December 2019—this recommendation is the first international standard in this domain. It aims to guide governments and innovators to anticipate and address the ethical, legal and social challenges raised by novel neurotechnologies while promoting innovation in the field.
Standards Roadmap: Neurotechnologies for Machine Interfacing—available through IEEE—noting the need for standards in the BCI arena and providing an overview of the existing and developing standards in the field of neurotechnologies for brain‐machine interfaces. The roadmap is broken into five main topics: (1) the science behind sensing technologies, (2) feedback mechanisms, (3) data management, (4) user needs, and (5) performance assessments of BCIs.
FPF Submits Comments Regarding Data Protection & COVID-19 Ahead of National Committee on Vital and Health Statistics Hearing
Yesterday, FPF submitted comments to the National Committee on Vital and Health Statistics (NCVHS) ahead of a Virtual Hearing of the Subcommittee on Privacy, Confidentiality, and Security on September 14, 2020.
The hearing will explore considerations for data collection and use during a public health emergency, in light of the deployment of new technologies for public health surveillance to tackle the nationwide COVID-19 pandemic. The Subcommittee intends to use input from expert comments and testimony to inform the development and dissemination of a toolkit outlining methods and approaches to collect, use, protect, and share data responsibly during a pandemic.
In our comments, we highlighted FPF’s recent work exploring the ethical, privacy, and data protection challenges posed by the COVID-19 crisis, and we shared resources that address a number of issues raised by the Committee in the Request for Public Comments. In particular, we provided FPF resources that address: (1) the application of the Fair Information Practice Principles (FIPPs) and proper scope of data collection, analysis, and sharing in an emergency; (2) differences in standards at the local, state, and federal levels; and (3) technical understanding of location data and the design of mobile apps.
In recent months, FPF’s Privacy and Pandemics Series has convened public health experts, academics, advocates, representatives of industry, and other experts to discuss how to create frameworks to safeguard the responsible use of data while creating and employing new tools, such as contact tracing apps. FPF has also developed educational resources, such as an infographic to demonstrate how mobile devices interpret signals from their surroundings, including GPS satellites, cell towers, Wi-Fi networks, and Bluetooth, to generate precise location measurements. We have also explored the differing standards that are arising at the state, federal, and local levels for how to respond to a public health emergency while protecting privacy and personal data. Addressing the proper scope of data collection, analysis, sharing and retention in an emergency, FPF also recently testified before the U.S. Senate Committee on Commerce, Science, & Transportation; and has also provided input at a Public Work Session of the Washington State Senate Committee on Environment, Energy & Technology.
By informing policymakers about the risks and regulatory gaps associated with location, health, wellness, and other data collected and used during a public health emergency, we hope to promote informed decision-making and regulation. We look forward to continuing to provide resources on the federal and state level to legislators and public health authorities on the responsible and ethical use of data in the fight against COVID-19.
FPF Presents @ RightsCon 2020: “Frontiers in health data privacy: navigating blurred expectations across the patient-consumer spectrum”
The patient-consumer spectrum is a growing concept in which healthcare is rapidly transitioning from a periodic activity in fixed, traditional health care settings to an around-the-clock activity that involves the generation, use, and integration of data reflecting many aspects of individuals’ lives and behaviors. Accompanying this spectrum are blurred distinctions between traditional and consumer-generated health information and differences in expectations of how health information across this spectrum should be protected or treated.
On July 27, 2020, during the RightsCon 2020 virtual conference, the Future of Privacy Forum’s (FPF’s) Health Policy Counsel and Lead, Dr. Rachele Hendricks-Sturrup, sat down with three health data governance and policy expert panelists to explore the privacy and policy implications across the broadening patient-consumer spectrum:
Teresa Patraquim da Conceição, Head Privacy Team – International, Novartis
An audience poll was taken to gauge attendees’ perspectives regarding the privacy of consumer-generated versus traditional health care data. Just over half (52%) of the audience members who participated in the poll felt that the privacy of consumer-generated health data should be treated the same as traditional health care data.
These split results highlight the need to discuss data privacy and rights across the growing patient-consumer spectrum. The panelists took on this challenge and offered the following key takeaways:
Data availability engenders discovery and collaboration… at a price
Context is critical
Smart regulation is key to protection
Informed consent remains important
Trust in data use requires transparency and governance
Data subject representation, rights, and respect are paramount
Data justice means addressing the digital divide
Dr. Hendricks-Sturrup and the panelists concluded that, in order to successfully navigate blurred expectations of privacy across this spectrum and make progress toward establishing meaningful legal and policy frameworks and best practices, diverse stakeholders from industry, academia, and civil society must be engaged and barriers to their collaboration must be addressed.
To learn more about the FPF Health Initiative, contact Dr. Rachele Hendricks-Sturrup at [email protected].
Call for Position Statements on Responsible Uses of Technology and Health Data During Times of Crisis
Event Overview
The Future of Privacy Forum, in collaboration with the National Science Foundation, Duke Sanford School of Public Policy, SFI ADAPT Research Centre, Dublin City University, and Intel Corporation, presents Privacy & Pandemics: Responsible Uses of Technology and Health Data During Times of Crisis — An International Tech and Data Conference, including a two-day virtual workshop on October 27-28, 2020, to explore the value and limits of data and technology in the context of a global crisis. Ten months into the COVID-19 pandemic, what role have tech and data played in combating the crisis, what have we learned about the limitations of law, policy, and technical tools, and what areas need reform and additional research?
We are soliciting position statements from leading technologists, scientists, policymakers, data experts, companies, and regulators to assess early conclusions about how data and technology have each played a role in efforts to study, control the spread of, and track COVID-19.
This call invites experts with a perspective on areas such as:
The limits of technology;
Technological advances that are needed;
Ways in which available data fell short;
Areas where access to data was limited;
Challenges in access to data or interoperability issues;
Successes and failings of current tools;
The role of de-identification, including where it fell short;
The role of privacy engineering in developing trusted data and technology based responses to a pandemic;
The role of Data Protection/Privacy Impact Assessments in shaping privacy protective technical solutions;
Tech and data impacts on disparities and equity concerns;
Role of trusted intermediaries in supporting data sharing;
Regulatory impacts on development/deployment of technology, or analysis of available data;
Tensions between privacy and the need for data access for immediate crisis management.
We invite you to submit a 500-1000 word position statement to be considered for inclusion in the upcoming workshop. Authors of accepted submissions will be offered a $1,500 (US) stipend to participate in a relevant workshop session or invited to present a “firestarter” at the virtual event to be held October 27-28, 2020. Accepted submissions will be distributed to workshop attendees in advance for review, assessment, and discussion. The Planning Committee will also organize a number of invited presentations.
A workshop report will be prepared and used by the National Science Foundation to help set direction for the Convergence Accelerator 2021 Workshops, speeding the transition of convergence research into practice to address grand challenges of national importance.
We are inviting submissions of position statements to catalyze conversations around the future of privacy and technology during times of crisis. We are seeking original, provocative, well-argued statements of approximately 500-1000 words.
We are interested in statements addressing specific, practical, identified challenges faced by academic researchers, public health experts and agencies, technologists, policymakers, industry and others who have played a role in responding to the COVID-19 crisis.
Works by undergraduate students, graduate students, or unaffiliated scholars, as well as from individuals with academic and/or corporate affiliations are all welcome. Works by interdisciplinary teams, specifically those representing the convergence of fields such as engineering, biology, social, and computer sciences are encouraged.
Submission Deadline
Submission of draft position statements: September 30, 2020. **DEADLINE CLOSED**
Submissions should be between 500-1000 words (1-2 pages), excluding references.
Submissions should include a separate cover letter listing all authors, affiliations, and contact information.
Authors agree that position papers may be posted as part of the workshop proceedings and may be referenced in the workshop report.
Reviewer Team
Chief Reviewer: Jules Polonetsky
Special Reviewers: Artificial Intelligence (Dr. Sara Jordan), Biometrics and Digital Identity (Ms. Brenda Leong), Platforms and Advertising Technologies (Ms. Christy Harris), Local Government and Open Data (Ms. Kelsey Finch), Health and Genetics (Dr. Rachele Hendricks-Sturrup), Legislative and Regulatory (Ms. Stacey Gray), Tech Policy (Limor Shmerling Magazanik), Youth and Education (Amelia Vance), Mobility and Location (Chelsey Colbert), Global Privacy and Personal Data Governance (Dr. Gabriela Zanfir-Fortuna)
Review Process
Position statements will be reviewed by at least three reviewers, including a subject matter expert. Reviewers will determine if a position statement will move forward for final decision on inclusion in the workshop.
Additional Information
For more information on this effort, including submission instructions, event details, or other questions, please contact Christy Harris at [email protected].
California’s SB 980 Would Codify Strong Protections for Genetic Data
Author: John Verdi (Vice President of Policy)
This week, SB 980 (the “Genetic Information Privacy Act”) passed the California State Assembly and State Senate with near-unanimous support (54-10 and 39-0). If signed by the Governor before the Sept. 30 deadline, the bill would become the first comprehensive genetic privacy law in the United States, establishing significant new protections for consumers of genetic services.
As we previously wrote and testified, the Genetic Information Privacy Act incorporates many of the protections in FPF’s 2018 Privacy Best Practices for Consumer Genetic Testing Services. Those Best Practices were drafted and published over the course of 2018 in consultation with a multi-stakeholder group of technical experts, scientists, civil society advocates, leading consumer genetic and personal genomic testing companies, and with input from regulators including the Federal Trade Commission (FTC) and the Department of Health and Human Services (HHS).
Leading genetic testing companies have adopted the Best Practices, making them enforceable by the FTC and state AGs; SB 980 would extend safeguards to users of other genetics companies, protecting consumers and building trust in the industry.
Below, we describe 1) the process and results of FPF’s 2018 efforts; and 2) the significance of SB 980 as compared to existing laws and the voluntarily adopted Best Practices.
FPF’s 2018 Stakeholder Process//
In 2018, Future of Privacy Forum conferred with leading genetics services and other experts to explore ways to address consumer privacy concerns related to genetics services. At the time, concerns were emerging in response to the rapid growth of the consumer genetics industry, and highly publicized cases of law enforcement access to genetic data, including the Golden State Killer investigation.
As a non-profit dedicated to convening divergent stakeholders to create workable best practices for emerging technologies, we solicited and received the input of scientists, consumer privacy advocates, government stakeholders, and other experts. FPF published the resulting Best Practices at the end of July 2018.
Since that time, some, but not all, of the direct-to-consumer genetics companies have voluntarily adopted FPF’s Best Practices. Some companies have chosen not to adopt the Best Practices, or to adopt only certain provisions, while others are supportive but have chosen not to formally incorporate the provisions of the Best Practices into their policies. Privacy policies and other voluntary legal commitments can be enforced by the Federal Trade Commission and State Attorneys General.
Why SB 980 is Significant //
If signed by the Governor, SB 980 would be a landmark law for genetic privacy, going beyond existing federal and state laws as well as self-regulation. Although the federal Genetic Information Nondiscrimination Act (GINA) prohibits certain types of discrimination based on genetic information, it does not provide comprehensive privacy protections for the collection of such data or the many ways that it can be used, sold, or shared (including for advertising or law enforcement purposes).
Similarly, the handful of states that have heretofore addressed genetic information privacy have not established comprehensive protections. For example, some states have enacted “mini-GINAs” (including California) or extended GINA’s protections to discrimination in life insurance, disability, or long-term care (Florida). Some states have limited law enforcement access (Nevada), and at least one has attempted to take a more comprehensive approach while recognizing genetic information as the property of the consumer (Alaska).
In contrast, SB 980 would establish broad, comprehensive consumer protections for genetic information. The protections go significantly beyond those that exist for other types of personal information in California under the California Consumer Privacy Act (CCPA), an approach that is justified given the unique sensitivity of genetic information. In particular, genetic information has the ability to reveal intimate information about health and familial connections, and is challenging to de-identify. The bill also contains certain aspects that are unique to the consumer genetics industry, such as the requirement that biological samples be destroyed upon request.
Similarly, SB 980 would go beyond FPF’s Best Practices by directly regulating the entire sector, rather than only the companies that have voluntarily chosen to adopt the Best Practices. Furthermore, although voluntary commitments can be enforced by the Federal Trade Commission (FTC) and others, such enforcement is necessarily limited to unfair and deceptive trade practices, and does not always allow for financial penalties. In contrast, SB 980 would establish civil penalties of up to $1,000 (for negligent violations) or $10,000 (for willful violations).
Penalties could add up quickly, as they are calculated on a per-violation, per-consumer basis; a willful violation affecting 1,000 consumers, for example, could expose a company to as much as $10 million in penalties.
Conclusion //
Genetic information carries the potential to empower consumers interested in learning about their health and heritage, and to fuel unparalleled discoveries in personalized medicine and genetic research. Given the Future of Privacy Forum’s mission to convene divergent stakeholders towards workable privacy practices for emerging technologies, it continues to be rewarding to play a role in shaping the leading practices for consumer genetic information. We are optimistic that SB 980 represents a major step forward for consumer rights.
How the Student Privacy Pledge Bolsters Legal Requirements and Supports Better Privacy in Education
The Student Privacy Pledge is a public and legally enforceable statement by edtech companies to safeguard student privacy, built around a dozen privacy commitments regarding the collection, maintenance, use, and sharing of student personal information. Since it was introduced in 2014 by the Future of Privacy Forum (FPF) and the Software and Information Industry Association, more than 400 edtech companies have signed the Pledge. In 2015, the White House endorsed the Pledge, and it has influenced company practices, school policies, and lawmakers’ approaches to regulating student privacy. Many school districts use the Pledge as they review prospective vendors, and it is aligned with—and has broader coverage than—the most widely-adopted state student privacy law.*
We are proud that 436 companies have signed the Pledge, with well over 1,000 applications since June 2016. FPF reviews each applicant’s privacy policy and terms of service to ensure that signatories’ public statements align with the Pledge commitments. If an applicant’s policies do not align with the Pledge, we work with the company to bring them into line. We also work with applicants to ensure that they understand the commitments they are making when they become a Pledge signatory. This process may result in the applicant bringing internal compliance and legal resources to bear in ways they previously did not—an effect that increases accountability. Nearly every company that applies ends up altering its privacy practices and/or policies to become a Pledge signatory.
The Student Privacy Pledge is a voluntary promise, not a law that applies to everyone. But once companies sign the Pledge, the Federal Trade Commission (FTC) and state Attorneys General (AG) have legal authority to ensure they keep their promises. The FTC and state AGs have a track record of using public commitments like the Student Privacy Pledge as tools to enforce companies’ privacy promises. These legal claims arise from the intersection between Pledge promises and Consumer Protection Unfair & Deceptive Acts & Practices (UDAP) statutes—without the Pledge commitments, it would be much more difficult for the FTC and AGs to bring enforcement actions under state and federal UDAP laws. In the absence of a comprehensive federal consumer privacy law, the Pledge provides an important and unique means for privacy enforcement; it complements state and federal student privacy laws that directly regulate companies and schools.
In addition to enforcement at the federal and state levels, many schools require vendors to adhere to contracts that are modeled after or heavily mirror the Pledge, and schools have contractual rights to enforce these promises. If companies are found to break their commitments, schools can force a vendor to change practices, terminate contracts with the company, and sue for damages.
When FPF learns of a complaint about a Pledge signatory, we analyze the issue and reach out to the signatory to understand the complaint, the signatory’s policies and practices, and other relevant information. We typically work with the company to resolve any pledge-covered practices that do not align with the Pledge. We seek to bring signatories into compliance with the Pledge rather than remove them as signatories – an action that could result in fewer privacy protections for users, as a former signatory would not be bound by the Pledge’s promises for future activities.
One of the most common misunderstandings about the Pledge is the assumption that the Pledge applies to all products offered by a signatory or used by a student. However, the Student Privacy Pledge applies to “school service providers”—companies that design and market their services and devices for use in schools. When a company offering services to both school audiences and general audiences becomes a Pledge signatory, the Pledge commitments only apply to the services they provide to schools. Companies selling tools or providing services to the general public are not obligated to redesign these products because they sign the Pledge. This is consistent with most state student privacy laws and proposed federal bills.
The Pledge is by no means a replacement for pragmatic updates to existing student privacy laws and regulations, or for a comprehensive federal privacy law that would cover all consumers, which FPF supports. The Pledge is a set of commitments intended to build transparency and trust by obligating signatories to make baseline commitments about student privacy that can be enforced by the Federal Trade Commission and state attorneys general. The Pledge is not intended to be a comprehensive privacy policy nor to be inclusive of all the many requirements necessary for compliance with applicable federal and state laws. With that said, most signatories take the Pledge because they wish to be thoughtful and conscientious about privacy.
In 2019, we decided to analyze the Pledge in light of the evolving education ecosystem, incorporating what we’ve learned from reviewing thousands of edtech privacy policies, engaging directly with stakeholders, and reviewing the 130 state laws that have passed since the Pledge was created. We are excited to apply this knowledge to the Student Privacy Pledge with Student Privacy Pledge 2020, being released this fall.
* The Student Privacy Pledge’s commitments are echoed in the most commonly passed student privacy law aimed at edtech service providers, California’s Student Online Personal Information Protection Act (Cal. Bus. & Prof. Code § 22584). A version of this law has been enacted by several states, including: Arizona (Ariz. Rev. Stat. § 15-1046), Arkansas (Ark. Code Ann. § 6-18-109), Connecticut (Conn. Gen. Stat. §§ 10-234bb-234dd), Delaware (Del. Code. Ann. tit. 14, §§ 8101), Georgia (Ga. Stat. § 20-2-660), Hawaii (Hi. Rev. Stat. §§ 302A-499-500), Iowa (Iowa Code § 279.70), Kansas (K.S.A. 72-6312), Maine (Me. Rev. St. Ann. 20-A § 951), Maryland (Md. Educ. Code § 4-131), Michigan (Mich. Comp. Laws § 388.1295), Nebraska (Neb. Rev. Stat. § 79-2,153), New Hampshire (NH. St. § 189:68-a), New Jersey (Assembly Bill 4978, signed into law Jan. 2020), North Carolina (N.C. Gen. Stat. § 115C-401.2), Oregon (Or. Rev. Stat. Ann § 336.184-187), Texas (Tex. Educ. Code § 32.151), Virginia (Va. Code Ann. § 22.1-289.01), and Washington State (Wash. Rev. Code § 28A.604).
FPF Presents Expert Analysis to Washington State Lawmakers as Multiple States Weigh COVID-19 Privacy and Contact Tracing Legislation
In response to the ongoing public health emergency, over the past few months state legislatures in the United States have diverted their resources towards establishing state and local reopening plans, allocating federal aid, and promoting public trust and public participation by addressing concerns over privacy and civil liberties.
Many states have introduced bills which would govern collection, use, and sharing of COVID-19 data by a range of entities, including government actors and commercial entities. Unless appropriate guardrails are put in place, data collected by governments through contact tracing could be used in unexpected, inappropriate, or even harmful ways. This has frequently been cited as a factor that makes over-policed and undocumented individuals less likely to participate in contact tracing, and it is also one of the reasons why the Google-Apple Exposure Notification API is only available for decentralized apps.
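To illustrate what “decentralized” means in this context, the sketch below shows, in highly simplified form, the general pattern used by decentralized exposure notification systems: phones broadcast short-lived random identifiers over Bluetooth, remember the identifiers they observe, and perform exposure matching locally once identifiers associated with positive cases are published. This is a conceptual illustration only; it does not reproduce the actual cryptography or key schedule of the Google-Apple Exposure Notification API.

```python
# Conceptual sketch of decentralized exposure matching: no central authority ever
# learns who was near whom; each phone checks published identifiers locally.
import secrets

class Phone:
    def __init__(self):
        self.my_ids = []        # rotating random identifiers this phone broadcasts
        self.observed_ids = []  # identifiers heard from nearby phones

    def broadcast_id(self):
        rolling_id = secrets.token_hex(8)  # short-lived random identifier
        self.my_ids.append(rolling_id)
        return rolling_id

    def record_nearby(self, rolling_id):
        self.observed_ids.append(rolling_id)

    def check_exposure(self, published_positive_ids):
        # Matching happens on the device, using only locally stored observations.
        return any(i in published_positive_ids for i in self.observed_ids)

alice, bob = Phone(), Phone()
bob.record_nearby(alice.broadcast_id())  # Bob's phone hears Alice's identifier

# Alice later tests positive and consents to publishing her identifiers.
published = set(alice.my_ids)
print("Bob exposed:", bob.check_exposure(published))  # True, computed locally
```

Because matching happens on the device, no central server learns the social graph of who was near whom, which is the core privacy rationale for the decentralized approach.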
Below, we discuss FPF’s participation in a July 28th COVID-19 Public Work Session hosted by the Washington State Senate Committee on Environment, Energy & Technology, and the wide range of active COVID-19 legislation in state legislatures, including New York, New Jersey, and California.
Washington Public Work Session (July 28)
On July 28, 2020, the Washington State Senate Committee on Environment, Energy & Technology held a Public Work Session to discuss government uses of data and contact tracing technologies. The Committee invited guest experts to give presentations, including FPF’s Senior Counsel Kelsey Finch, Consumer Reports’ Justin Brookman, and the Washington State Department of Health.
In FPF’s presentation, we recommended that policymakers and technology providers follow the lead of public health experts, and outlined key considerations when deciding how to design and implement digital contact tracing tools. Important trade-offs exist among public health, privacy, accuracy, effectiveness, equity, and trust; and best practices are emerging around: (1) transparency about data collection and sharing; (2) purpose and retention limitations; (3) privacy impact assessments; (4) prioritization of accessibility; (5) SDK caution; (6) interoperability; and (7) security. Recognizing widespread consensus that apps ought to be voluntary, Ms. Finch also emphasized the need to find ways to promote and maintain public trust.
These recommendations align with FPF’s recent report promoting responsible data use and with FPF’s April 2020 testimony at the hearing “Enlisting Big Data in the Fight Against Coronavirus,” convened by the U.S. Senate Committee on Commerce, Science, and Transportation.
Read FPF & BrightHive Report: “Digital Contact Tracing: A Playbook for Responsible Data Use”
See FPF Infographic: “Understanding the World of Geolocation Data”
Legislative Trends in the States
State governments, private employers, and schools are increasingly turning to new technologies and digital solutions to help address the ongoing public health emergency. Over 20 states are considering, developing, or implementing decentralized Bluetooth-based apps built on the Google-Apple Exposure Notification API, an effort supported nationally by the Association of Public Health Laboratories so that individuals can receive exposure alerts even when they travel across state borders.
Meanwhile, most state legislatures have suspended their sessions for the year, with only some states remaining in regular session, and others convening special sessions to address the pandemic. Contact tracing efforts (both manual and digital) rely not only on fast and reliable testing, but also on public participation and trust — a key public health consideration that is leading many states to consider how they can bolster public promises with strong privacy and data protection laws.
As a result, in some states, COVID-19 privacy bills have already been signed into law, including a few notable new state laws:
Kansas’s H.B. 2016 (signed into law June 8, 2020, following a special session) requires participation in state contact tracing to be voluntary, mandates confidentiality and data retention requirements for contact tracing information, and prohibits the use of cellphone location data to “identify or track, directly or indirectly, the movement of persons” for contact tracing purposes;
New York’s S 8362 (signed into law June 17, 2020) requires that all contact tracers hired by the state health departments be representative of the cultural and linguistic diversity of the communities in which they serve.
South Carolina’s HJR 5202 (signed into law June 25, 2020) prohibits the local health department from using mobile apps created for contact tracing.
In other states, COVID-19 privacy legislation has been introduced and remains active — with some bills appearing likely to pass in upcoming weeks. Most notably, active bills in New York, New Jersey, and California, if passed, would create a range of new requirements for both private sector companies and government entities with respect to COVID-19 related health information.
New York
In New York, which will remain in session through the end of 2020, several COVID-19 privacy bills have gained traction in recent months, including:
NY A10500, which passed the New York Senate on July 23, would mandate the confidentiality of COVID-19 contact tracing information and prohibit access to such data by law enforcement or for immigration purposes. The ACLU and other community organizations support the bill.
NY S8448, which passed the Senate on July 23, would regulate the collection and use of emergency health data and the use of COVID-19 technology. The bill contains transparency requirements, data minimization obligations, retention limitations, and data security obligations for government entities and “third party recipients” of emergency health data. The scope of the bill resembles two federal bills introduced by Senator Blumenthal and Senator Wicker earlier this year to regulate emergency health data.
New Jersey
In New Jersey, the legislative session runs through the end of 2020. In January, a number of geolocation data bills were introduced that remain technically under consideration (e.g., A 193 and A 5259), in addition to general comprehensive privacy bills (e.g., S269 and A2188). However, New Jersey legislators have since prioritized urgent pandemic response bills, including:
A4170, which passed the Assembly and was received in the Senate on August 3, would require public health authorities to abide by purpose and retention limitations (30 days) for contact tracing data and, if data is shared with any third parties, to publish the names of those entities online. Third parties would also be required to abide by the same obligations, with a civil penalty of up to $10,000 collected by the Commissioner of Health. S 2539 is a companion bill.
California
In California, the legislature has generally prioritized pandemic-related bills over other pieces of legislation, such as amendments to the California Consumer Privacy Act (e.g., AB 3119 and AB 3212). However, some related privacy bills remain under consideration, such as other CCPA amendments (AB 1281 and AB 713) and a consumer genetics privacy bill (SB 980). The final day for bills to be passed by the Assembly or the Senate is August 31, 2020.
Active COVID-19 privacy bills in California include:
AB 685, which would require employers to notify their employees and state health departments of known or reasonably known exposures to COVID-19 within 24 hours.
AB 2004, which would establish a pilot program to expand the use of verifiable health credentials for communication of COVID-19 or other medical test results; and prohibit law enforcement agencies from requiring a patient to show such a credential.
On August 20, 2020, two additional noteworthy bills, which would have regulated data related to established methods of contact tracing (AB 660) and digital contact tracing tools (AB 1782), narrowly failed to progress out of the Senate Appropriations Committee.
AB 660 would have required that data collected for the purpose of contact tracing could only be used, maintained, or disclosed to facilitate contact tracing efforts, and would have prohibited law enforcement from participating in contact tracing. AB 660 was opposed by local law enforcement due to a lack of clarity regarding how it was intended to apply to law enforcement in the context of an employer-employee relationship. Concern was also raised about possible unintended consequences within prisons, which have experienced outbreaks. At a recent hearing, some legislators argued that California already has adequate legislation to protect individuals from law enforcement, such as the California Values Act of 2017 (SB54), which prevents state and local law enforcement agencies from using their resources on behalf of federal immigration enforcement agencies. However, numerous community organizations supported the bill, arguing that it would increase participation in contact tracing, thereby contributing towards more complete datasets and overall effectiveness.
AB 1782 would have comprehensively regulated digital contact tracing tools (“technology-assisted contact tracing”) offered by public health entities and businesses. AB 1782 would also have prohibited discrimination on the basis of participation in technology-assisted contact tracing. The scope of the bill resembles a bipartisan federal bill introduced by Senator Cantwell and Senator Cassidy to regulate exposure notification services.
Overall Trends
While most states, and the federal government, do not have a comprehensive baseline consumer privacy law that applies to all commercial uses of data, many existing federal and state laws do already apply to contact tracing efforts or to certain types of data (such as location data collected by cell phone carriers). For example, all states have unfair and deceptive practices (UDAP) laws and laws governing healthcare entities (supplementing HIPAA). Many states also have strong laws governing the confidentiality of state-held records, such as the California Confidentiality of Medical Information Act (Cal. Civil Code §§ 56–56.37 [1992]) and the Uniform Health Care Information Act (National Conference of Commissioners on Uniform State Laws, 1988).
However, as states increasingly contract with private entities to provide digital tools in response to the pandemic, COVID-19 policy frameworks are developing to regulate new data flows across public and private sectors, often involving sensitive location and health information. Both the application of existing state privacy laws and the introduction of new laws to address the pandemic are likely to influence federal and state privacy debates for years to come.
Christy Harris Discusses Trends in Ad Tech
We’re talking to FPF senior policy experts about their work on important privacy issues. Today, Christy Harris, CIPP/US, Director of Technology and Privacy Research, is sharing her perspective on ad tech and privacy.
Prior to joining the FPF team, Christy spent almost 20 years at AOL, where she helped navigate novel consumer privacy issues in the development of internet staples such as AOL Mail, Advertising.com, MapQuest, and The Huffington Post. She also served as Privacy Program Manager at the cybersecurity company FireEye, Inc., where she implemented a vendor management program in preparation for the GDPR and worked to streamline the company’s global data practices.
Can you walk us through your career and how you became interested in privacy?
I worked at AOL for nearly 20 years, starting out by providing tech support in one of their call centers before moving up to AOL’s corporate headquarters. When I started working at AOL, the confidentiality of customer information was a core value, ingrained in everything we did, but online privacy as we know it today was barely a burgeoning field. Eventually, I moved to a position at AOL specifically focused on consumer advocacy, working with the Chief Trust Officer who oversaw a variety of consumer advocacy issues including anything related to privacy policies and the company’s data uses.
Eventually, AOL’s consumer advocacy team evolved into its global privacy team, led by its first official Chief Privacy Officer, Jules [Polonetsky], and the team responsible for protecting user privacy grew rapidly, taking on broader responsibilities and gaining the ability and authority to structure and encourage responsible company practices around user data. While Jules left AOL to launch FPF in 2009, AOL remained an FPF supporter, involved in various working groups and other FPF efforts over the years.
In 2017, I left AOL and spent some time working as an independent consultant for several tech companies. With the EU’s GDPR going into effect in 2018, many companies were scrambling to ensure they would be compliant on Day 1 of its enforcement, and the CCPA (California’s newest privacy law) was swiftly on its heels. This led to much uncertainty for companies trying to determine their compliance obligations and the most efficient approaches while avoiding costly re-architecture of established systems and processes. I eventually joined FireEye full-time, working across teams to implement a vendor management process that included mechanisms for ensuring global compliance in light of the GDPR. Like many companies managing global operations, FireEye had a strong desire to streamline processes and practices to provide consistency for both customers and internal operations.
During my time as a consultant, I also worked on an ad tech-related project for FPF, which eventually led to my current role as the Director of Technology and Privacy Research.
What projects are you working on at FPF related to ad tech and mobile platforms?
On a daily basis, I keep a close eye on how companies operate in the online advertising and ad tech space, drawing on my experiences at AOL and an understanding of the operations and needs of advertisers, publishers, and platforms. I also approach ad tech from a more technical perspective: evaluating how ad tech providers build and implement their technology, understanding how the systems operate, tracking the flow of consumer data, and recognizing the needs and demands of publishers and advertisers leveraging the vast amount of data available in conjunction with the offerings and services of ad tech providers. All of this is part of an overall effort to reconcile how advertisers want to use consumer data with consumer expectations around the use of their data.
A key focus of my work is training – helping policymakers, brands, and privacy officers understand the details and mechanics of online data use so they can each be most effective in their roles. You can see some of our master class sessions online, and I and my colleagues are available for more tailored group sessions.
Earlier this year, we launched the International Digital Accountability Council (IDAC). After identifying the need for a third-party enforcement and accountability entity to address the gap between legislation and mobile platforms’ rules and requirements, we incubated the IDAC under the FPF umbrella. Today, the IDAC is an independent watchdog organization dedicated to ensuring a fair and trustworthy digital and application marketplace for consumers, encouraging companies to engage in responsible practices. I’m very proud of that effort and look forward to watching them continue to grow into a widely influential organization.
Balancing user expectations with industry standards is an interesting challenge. Consumers typically use an app because it will provide a specific service or allow them to achieve a specific goal — whether that’s managing a calendar, ordering dinner, or passing the time playing a fun game. App companies need to be clear about what it is they are providing to users, how they use and treat the data they collect, and ensure that any secondary or downstream uses of data are not unexpected or discriminatory (even if such uses ensure an app is free to use).
What do you see happening over the next few years in ad tech?
Over the past few years, we’ve seen the GDPR have a very significant, global impact — despite ostensibly being a European law, the GDPR has influenced companies’ behaviors with respect to consumer data worldwide. We’re seeing other countries follow the example of the GDPR, working to establish privacy regulations informed by European law and its interpretations. I’ve found it fascinating to see how different cultural norms and expectations with respect to privacy have impacted national and state privacy laws, and it will be interesting to see how they continue to evolve. For example, where Europe recognizes privacy as a fundamental human right and approaches default data practices from that perspective, U.S. companies often rely on a system of notice and choice, requiring users to opt out of certain practices as the default. These differing perspectives are often reflected in how companies collect, use, and share consumer data, and have to be embraced and adapted to accommodate a globally accessible and targeted internet.
In the United States, we’ve seen California enact regulations embracing approaches similar to the GDPR from a perspective that reflects U.S. cultural norms. I expect California to serve as a driver of additional privacy legislation and the evolution of default approaches in the United States, but I also expect to see changes coming from the ad tech providers themselves as well as the companies leveraging their services. Companies with the power to determine what can be done within their respective environments, for example Google and Apple in the mobile platform context, often drive a significant portion of the policy standards and discussions today. When a company controls a platform and the technologies on it that may be used to interact with consumers, a seemingly minor change on that platform can cause a ripple effect felt across the ecosystem.
Ultimately, I think the push-pull between advertisers and the platforms on which they reach consumers will continue. The brands and advertisers themselves may not always be technical experts, but these organizations excel at finding creative ways to reach their goals. This is where we end up in a “whack-a-mole” situation – platforms’ goals may not align with those of the advertisers on their platforms, creating a constant balancing act. FPF’s role and perspective as data optimists allows us to bring together a variety of stakeholders and experts to help achieve the various goals, using data in new and innovative ways, while always respecting the users at the core of the conversation.
Congrats to National Student Clearinghouse
National Student Clearinghouse’s StudentTracker for High Schools Earns iKeepSafe FERPA Badge
July 22, 2015 – iKeepSafe.org, a leading digital safety and privacy nonprofit, announced today that it has awarded its first privacy protection badge to StudentTracker℠ for High Schools from the National Student Clearinghouse, the largest provider of electronic student record exchanges in the U.S. Its selection as the first recipient of the new badge reflects the ongoing efforts of the Clearinghouse, which performs more than one billion secure electronic student data transactions each year, to protect student data privacy.
A nonprofit organization founded by the higher education community in 1993, the Clearinghouse provides educational reporting, verification, and research services to more than 3,600 colleges and universities and more than 9,000 high schools. Its services are also used by school districts and state education offices nationwide.
Earlier this year, iKeepSafe launched the first independent assessment program for the Family Educational Rights and Privacy Act (FERPA) to help educators and parents identify edtech services and tools that protect student data privacy.
“The National Student Clearinghouse is as committed to K12 learners as we are to those pursuing postsecondary education, and that also means we’re committed to protecting their data and educational records,” said Ricardo Torres, President and CEO of the Clearinghouse. “So many aspects of education are moving into the digital realm, and we’re focused on providing students with the privacy and protection they deserve in a rapidly changing digital environment.”
The Clearinghouse became the first organization to receive the iKeepSafe FERPA badge by completing a rigorous assessment of its StudentTracker℠ for High Schools product, privacy policy and practices. “As the first company to earn the iKeepSafe FERPA badge, the National Student Clearinghouse has demonstrated its dedication to K12 students and their families, and to the privacy and security of their data,” said iKeepSafe CEO Marsali Hancock.
Products participating in the iKeepSafe FERPA assessment must undergo annual re-evaluation to continue displaying the iKeepSafe FERPA badge. For the evaluation, an independent privacy expert reviewed the StudentTracker℠ for High Schools product, its privacy policy and practices, as well as its data security practices.