FTC Requires Algorithmic Disgorgement as a COPPA Remedy for First Time

On March 4, the Federal Trade Commission (FTC) and Department of Justice (DOJ) announced a settlement agreement with WW International and its subsidiary, Kurbo (Kurbo by WW), after charging the companies with violating the Children’s Online Privacy Protection Act (COPPA) for improperly collecting health information and other data from children as young as eight years old. Among other penalties, the settlement requires the deletion of all “affected work product”–which includes algorithms–resulting from the companies’ collection of children’s data. Significantly, this is the first time that the FTC has imposed an algorithmic disgorgement penalty in a COPPA enforcement action, a measure that reflects the Commission’s increasing focus on algorithmic fairness.

The COPPA Claim

Aimed at protecting children under 13, COPPA applies to online service providers and commercial websites that (1) are directed to children and collect, use, or disclose children’s personal information; (2) have actual knowledge that children use the provider’s online service; or (3) provide a third party service to websites or online service providers that collect information from children. Among other requirements, services subject to COPPA must:

  1. Give parental notice before collecting, using, or disclosing child data;
  2. Make reasonable efforts to ensure parents receive direct notice of the collection, use, or disclosure of child data; 
  3. Obtain verifiable parental consent (VPC) before the collection, use, or disclosure of child data; and 
  4. Retain child data no longer than necessary to further the purpose for which the provider collected the information.

Here, the FTC and DOJ alleged that Kurbo by WW, a service marketed to children under 13, failed to provide adequate notice and obtain VPC before collecting personal information including weight, height, food intake, and physical activity. Specifically, the agencies argued that the measures Kurbo by WW did take, such as an age gate, were insufficient under the rule and even incentivized children to lie about their birthdate to circumvent them. Moreover, the agencies alleged that Kurbo by WW retained child data indefinitely and would delete it only upon parental request. The settlement imposes multiple remedies, including injunctive relief, monetary fines, compliance reporting, and, significantly, a requirement that Kurbo by WW delete all work product resulting from the collection of children’s personal information.

The Significance of This Settlement

The most notable aspect of this settlement is the algorithmic disgorgement penalty: the requirement that the companies delete all algorithms resulting, in whole or in part, from the improper collection of children’s data. The FTC imposed this penalty for the first time in 2019 in a final order against Cambridge Analytica. The agency used the remedy again in the 2021 Everalbum settlement, in which the developers of a photo app were required to delete facial recognition algorithms developed through training on data that was improperly collected. In a significant next step, this is the first time the agency has imposed the penalty in a COPPA settlement. Like monetary fines, compliance reporting, and other injunctive relief, algorithmic disgorgement is a measure intended to deter companies from improperly collecting and retaining child data. However, the penalty goes a step further than other COPPA remedies by preventing companies from benefiting from the improperly collected data in the future. In the FTC’s press release for the settlement, FTC Chair Lina Khan remarked, “Our order against these companies requires them to delete their ill-gotten data, destroy any algorithms derived from it, and pay a penalty for their lawbreaking.” This strong language from the FTC Chair signals an interest in doing more to hold companies subject to COPPA accountable.

Recently, child privacy has become a trending topic for both policymakers and enforcement agencies. Historically, the FTC has tended to bring only a few COPPA cases per year, but the Kurbo by WW settlement marks the FTC’s second COPPA settlement in just three months. Time will tell whether COPPA enforcement actions become more frequent in the wake of increasing calls to protect children’s privacy. Regardless, this settlement stands to impact future COPPA enforcement by setting a new precedent for the penalties the FTC is willing to impose on companies. It also raises important questions about how companies can obtain effective VPC, an issue FPF’s Youth & Education team is exploring in our report on The State of Play: Verifiable Parental Consent and COPPA. Companies with child audiences should pay close attention to this settlement and its penalties, and ensure their practices comply with COPPA.

Additional Resources:

FTC Blog Post on the Kurbo by WW settlement for Businesses 

For more on COPPA and VPC, see FPF’s Work on Verifiable Parental Consent (VPC) at thestateofplay.org

Future of Privacy Forum Statement on Ukraine

The Future of Privacy Forum is heartbroken about the horrific events unfolding in Ukraine. We stand with the people of Ukraine. FPF will contribute to José Andrés’s World Central Kitchen, serving thousands of meals to Ukrainian families. FPF will also match any donations made by our staff to WCK or another nonprofit organization of their choice to support Ukraine. 


The Significance of Inclusion in Clinical Trials and Medical Research Databases

Our colleagues at the Israel Tech Policy Institute (ITPI) published a thoughtful blog on the significance of diversity and inclusion in clinical trials and health and medical research databases.

They discuss the imperative of being represented in data, for one’s existence to be recognized and considered. When such data is the building block for cures, therapies, and wellness development, representation carries consequences for one’s health prospects. Accordingly, absence from the clinical data and health datasets used for health and medical research reflects a lack of representativeness and diversity among research participants, which is known to have medical and social effects on individuals and communities alike.

The diversity of populations in developed countries (where most medical research is conducted), which has grown with global migration movements and the resulting demographic changes, is not faithfully reflected in the composition of participants in clinical trials and biomedical databases. To date, the majority of participants in clinical trials and medical databases are Caucasian – mostly males of European descent. It is estimated that 78% of the genetic and genomic information available today originates from this population, although Europeans and their descendants make up barely 16% of the world population.

You can read the full analysis on the ITPI blog.

Utah Consumer Privacy Act Passes State Legislature

This week, the Utah legislature passed the Utah Consumer Privacy Act (SB 227). If signed into law by Governor Spencer Cox, Utah will follow California, Virginia, and Colorado as the fourth U.S. state to establish a baseline regime for the protection of personal data. The law would come into effect in December 2023.

“While the Utah Consumer Privacy Act would create some new rights for Utah residents, it contains significantly fewer privacy protections than leading state frameworks. A national comprehensive law that sets strong baseline standards will be the only way to ensure that geography doesn’t determine individuals’ basic privacy rights.”

Statement by Keir Lamont, Senior Counsel, Future of Privacy Forum

The Utah Consumer Privacy Act adopts a structural framework for protecting personal information similar to that of the legislation enacted in Virginia and Colorado. As such, it would be unlikely to introduce significant new compliance challenges for businesses that are already preparing for those laws, which come into effect in 2023.

However, Utah’s law would set significantly narrower individual rights and business obligations than privacy regimes enacted in other states.

The Utah Consumer Privacy Act is poised to secure some important new protections for Utah residents, such as access and deletion of certain personal information. However, given its limitations, the Act would not meaningfully advance individual privacy interests relative to approaches taken in other jurisdictions. The ultimate significance of the Utah Consumer Privacy Act may be that it represents an overall trend of U.S. states toward adopting privacy frameworks that are based upon the Virginia and Colorado laws, rather than following the lead of California.

Media Inquiries: [email protected]

Brussels Privacy Symposium 2021 Report

On November 16, 2021, the Future of Privacy Forum (FPF) and the Brussels Privacy Hub of Vrije Universiteit Brussel (VUB) hosted the Brussels Privacy Symposium 2021 – The Age of AI Regulation: Global Strategic Directions. The event, convened by Jules Polonetsky, CEO of FPF, and Christopher Kuner and Gianclaudio Malgieri, Co-Chairs of the Brussels Privacy Hub (BPH), brought together policymakers, academic researchers, civil society organizations, and industry leaders from the European Union (EU), the Organization for Economic Cooperation and Development (OECD), the United States, Brazil, and Singapore. Participants discussed the most recent trends in the governance of Artificial Intelligence (AI), with a focus on addressing the risks AI systems pose to fundamental rights while fostering their responsible development and uptake. A new report from FPF’s Sebastião Barros Vale, Katerina Demetzou, and Lee Matheson summarizes and offers context to the discussions at the event.

The 2021 Brussels Privacy Symposium was the fifth-annual academic program jointly presented by the BPH and FPF. In this context, the Symposium’s panelists debated the proposal for a legal framework that the European Commission (EC) published in April 2021 (the AI Act), a first-of-its-kind comprehensive law for AI systems that takes a risk-based approach, scaling legal obligations to the severity of the risks that specific AI systems pose. Furthermore, speakers drew comparisons between the proposed EU model and approaches to AI regulation surfacing elsewhere, such as in the US, Brazil, Singapore, and China.

The keynote panel, which covered the EU’s road ahead to the proposed AI Act and was moderated by Gianclaudio Malgieri, BPH Co-Director and Associate Professor of Law at EDHEC Augmented Law Institute (Lille), featured the following speakers:

The following panel saw a Global Comparative Discussion on Approaches to AI Regulation, Governance and Oversight, moderated by Dr. Gabriela Zanfir-Fortuna, Vice President for Global Privacy at FPF and Affiliated Researcher at the VUB’s Research Group on Law, Science, Technology & Society (LSTS). Speakers included:

The last panel was titled Should Certain Uses of AI Be Banned?, and it was moderated by Ivana Bartoletti, Global Chief Privacy Officer at Wipro and Co-Founder of the Women Leading in AI Network. Speakers included:

To learn more, read the report.

If you have any questions about the Report, contact Dr. Gabriela Zanfir-Fortuna at [email protected] or Dr. Rob van Eijk at [email protected].

Privacy Harms, Global Privacy Regulation, and Algorithmic Decision Making are Major Topics During Privacy Papers for Policymakers Event

For the 12th year, the Future of Privacy Forum (FPF) hosted its Privacy Papers for Policymakers event, honoring the 2021 Privacy Papers for Policymakers Award winners. This year’s event featured an opening keynote by Colorado Attorney General Phil Weiser and facilitated discussions between the winning authors – Daniel Solove, Ben Green, Woody Hartzog, Neil Richards, Joris van Hoboken, Ronan Ó Fathaigh, Jie Wang, Shikun Zhang, and Norman Sadeh – and leaders from the academic, industry, and policy landscape, including Maneesha Mithal, Sarah Holland, Travis Hall, Quentin Palfrey, Dr. Clarisse Girot, and John Howard, Ph.D. 

In his keynote, AG Weiser outlined his approach for fostering conversations in the privacy space that bring together policymakers and academics while ensuring the integrity of the discussions, an approach Weiser called the “true north” of his career. Weiser spoke to the lack of dialogue within Congress and offered examples of how his home state of Colorado has facilitated productive conversations at the state level around data privacy. Weiser pointed to the recently passed Colorado Privacy Act as a testament to how bipartisanship is “still alive and well at the state level.”

AG Weiser stated that states considering privacy legislation must bring together “those who are practicing on the ground as well as those who are very gifted scholars.” With so many entities in the field, it is challenging to apply a one-size-fits-all solution or approach. Weiser noted, “we want to create a regulatory regime that is adaptable, and that can both protect data and consumers’ privacy while not getting in the way of innovation.” Through respectful and thoughtful collaboration, advances in data protection, security, and privacy can be achieved at the state and federal levels.

Weiser stressed the importance of collaboration and respect in conversations around privacy. He highlighted the Ginsburg/Scalia Initiative, a bi-partisan gathering of state AGs honoring the friendship of the two late Supreme Court Justices, which convenes to engage in dialogue to solve pressing issues. Weiser concluded his keynote by congratulating FPF on creating an event that followed in the spirit of Justices Scalia and Ginsburg. FPF’s PPPM event encourages all attendees to “think differently, to take different sorts of thoughts seriously, and to look at issues from different angles.”

Colorado Attorney General Phil Weiser

Following Attorney General Weiser’s keynote address, the event shifted to moderated discussions between the authors and leaders from the academic, industry, and policy communities. Click the links below to read each of the winning papers, or read the 2021 PPPM Digest, which includes summaries of the papers and more information about the authors and judges.

Daniel Solove kicked off the discussion section of the event by talking about his paper, Privacy Harms, with Maneesha Mithal, Cybersecurity Partner at Wilson Sonsini. This paper, co-authored by UVA School of Law Professor Danielle Citron, analyzed how courts define harm in cases involving privacy violations and how the requirement of proof of harm has impeded the enforcement of privacy law due to the dispersed and minor effects that most privacy violations have on individuals. “We think that harm should only be required when the goal is compensating people,” said Daniel Solove. “When the goal is deterrence, really the harm shouldn’t matter. The goal should be what’s the most effective deterrence.”

Daniel Solove and Maneesha Mithal

Next, Woody Hartzog, Northeastern University School of Law and Khoury College of Computer Sciences, Stanford Law School Center for Internet and Society; and Neil M. Richards, Washington University School of Law, Yale Information Society Project, Stanford Center for Internet and Society discussed their paper, The Surprising Virtues of Data Loyalty. The authors were joined by Sarah Holland, Public Policy Manager at Google. Professors Hartzog and Richards’ paper looked into criticisms of data loyalty, arguing that the concept of data loyalty has some surprising virtues, including checking power and limiting systemic abuse by data collectors. “We think that data loyalty actually gets you something that existing law does not. We think it’s able to cover a lot of new problems,” said Woody Hartzog. “We think that data loyalty is a way to firm up existing obligations.”

Woody Hartzog, Neil M. Richards, and Sarah Holland

Next, Ben Green, the University of Michigan at Ann Arbor, Gerald Ford School of Public Policy, Harvard University, Berkman Klein Center for Internet & Society, discussed his paper, The Flaws of Policies Requiring Human Oversight of Government Algorithms, with Travis Hall, Telecommunications Policy Analyst at the National Telecommunications and Information Administration (NTIA). His paper analyzed the use of human oversight of government algorithmic decisions and concluded that humans could not perform many of the desired oversight responsibilities. He argued that by continuing to use human oversight as a check on these algorithms, the government legitimizes the use of faulty algorithms without addressing the associated issues. “The vast majority of evidence shows that people are incapable of reliably performing exactly the roles that these policies are calling for. The problem is the regulation doesn’t actually address the underlying harm,” said Ben Green. “I think that gets us into this really gnarly situation where we have a false sense of security, that these algorithms are appropriate and legitimate to use, when in fact, the underlying concerns haven’t actually been resolved.”

Ben Green and Travis Hall

The next paper discussed was Smartphone Platforms as Privacy Regulators by Joris van Hoboken, Vrije Universiteit Brussels, Institute for Information Law, University of Amsterdam; and Ronan Ó Fathaigh, Institute for Information Law, University of Amsterdam. The authors were joined by Quentin Palfrey, President of the International Digital Accountability Council. The paper analyzed the role of online platforms and their impact on data privacy in today’s digital economy before providing an argument as to what platforms’ role should be in legal frameworks. “What we try to do is to build a disclosure model around the regulatory behavior that these [smartphone] platforms are engaging in,” said Ronan Ó Fathaigh. “We don’t make the claim that platforms are engaging in behavior that is anti-competitive, but there are a lot of different commentators that are making those allegations, and certain app companies are making allegations that privacy is being used as a tool in anti-competitive behavior. We give the platforms the benefit of the doubt.”

Joris van Hoboken, Ronan Ó Fathaigh, and Quentin Palfrey

Jie (Jackie) Wang, W&W International Legal Team, Kinding Partners, spoke next on her paper, Comparison of Various Compliance Points of Data Protection Laws in Ten Countries/Regions, with Dr. Clarisse Girot, Managing Director for Asia Pacific at the Future of Privacy Forum. Her paper compares China’s Personal Information Protection Law (PIPL) with data protection laws in nine regions to assist overseas Internet companies and personnel in better understanding the similarities and differences in data protection and compliance between each country and region. “Helping ensure personal data compliance is part of my daily work,” said Wang. “The best way to learn the PIPL is to digest it by writing an in-depth analysis of it.”

Jie (Jackie) Wang and Dr. Clarisse Girot

Shikun (Aerin) Zhang and Norman Sadeh, Carnegie Mellon University, closed the event discussing their paper, co-authored by Yuanyuan Feng, University of Vermont; Lujo Bauer, Carnegie Mellon University; Lorrie Faith Cranor, Carnegie Mellon University; and Anupam Das, North Carolina State University, “Did you know this camera tracks your mood?”: Understanding Privacy Expectations and Preferences in the Age of Video Analytics. Shikun Zhang and Norman Sadeh were joined by Dr. John J. Howard, Principal Data Scientist at Maryland Test Facility. The paper seeks to determine how individuals should be notified that they are being recorded by studying 123 individuals’ sentiments across 2,328 video analytics deployment scenarios. “People often don’t realize that many of these cameras are connected to video analytic capabilities,” said Professor Sadeh. “We believe that there’s really a need to better understand how people feel about these very diverse scenarios as they’re emerging today, and using that to inform the design idea as mechanisms to notify people and to give them, ideally, the ability to exercise those rights that, in principle, are now being made available to them.”

Shikun (Aerin) Zhang, Norman Sadeh, and Dr. John J. Howard

Thank you to Attorney General Weiser and Honorary Co-Hosts Senator Edward Markey and Congresswoman Diana DeGette for their support and work around this event. We would also like to thank our winning authors, discussants, everyone who submitted papers, and event attendees for their thought-provoking work and support. Learn more about the event on the FPF website and watch a recording of the event on the FPF YouTube channel.

New FPF Report: Demystifying Data Localization in China – A Practical Guide

On February 21, 2022, FPF published a report detailing China’s data governance framework for data localization and cross-border transfers. The report outlines 10 steps organizations can take before deciding to localize or transfer data, with practical advice on how to carry out each of them. By examining provisions of relevant laws and administrative regulations passed by ministerial departments, it aims to give organizations a better understanding of how the transfers framework operates, the expectations of Chinese regulatory authorities with respect to such transfers, and the specific steps controllers can take for better compliance mapping. It is important to note that this report does not contain legal advice.

While the new data protection and data security legal framework solidified and added to pre-existing data localization requirements, it also clarified that data can be transferred or made accessible outside of China if specific conditions are met.

Under Chinese law, data localization is only required in certain circumstances framed around two distinct conceptual pillars: (1) which entity is processing the data; and (2) what type of data is being processed. With respect to the first pillar, certain special categories of controllers must store their data in China due to their importance to China’s national security and economy, and may only transfer data with the approval of regulatory authorities. For the second, controllers must store “important data” in China, and receive approval before transferring such data abroad.

In other circumstances, controllers do not need to store data locally in China but must comply with other transfer requirements. Article 38 of the Personal Information Protection Law (PIPL) sets forth these conditions for lawfully transferring data. Once a controller chooses a transfer mechanism, it must comply with additional transparency obligations. However, it is important to take both the PIPL and the Data Security Law (DSL) requirements into account when deciding whether to localize data or to transfer it. 

In order to untangle this complex legal landscape, this Report proposes 10 steps that data controllers can take before deciding to localize or transfer data, with practical advice on how to carry them out:

Step 1 – Determine scope and when data is “transferred” overseas 

Step 2 – Evaluate the type of data controller and whether it is a critical information infrastructure operator (CIIO) or a special controller 

Step 3 – Determine the type of data to be transferred including whether it is important data

Step 4 – Evaluate whether a security assessment by the CAC is required 

Step 5 – Determine whether a cybersecurity review is mandatory

Step 6 – Determine if an exception applies 

Step 7 – Choose the transfer mechanism 

Step 8 – Check whether an international treaty or agreement is applicable 

Step 9 – Obligations for Entrusted Processors (委托处理)

Step 10 (bonus) – Determine whether the transfer is compelled by a foreign judicial or law enforcement body

The Report also contains an annexed Flowchart with a summary of the 10 steps.
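Purely as an illustration of how an organization might internally operationalize this kind of checklist (this sketch is not taken from the report and is not legal advice), the first few screening questions could be encoded along the following lines; the Transfer record, its field names, and the decision function are hypothetical placeholders, and terms such as “important data” always require case-by-case legal analysis.

```python
# Illustrative only: a simplified encoding of the first few screening questions
# from the 10-step checklist. Not legal advice.
from dataclasses import dataclass

@dataclass
class Transfer:
    crosses_border: bool            # Step 1: is data actually "transferred" overseas?
    controller_is_ciio: bool        # Step 2: is the controller a CIIO or special controller?
    includes_important_data: bool   # Step 3: does the dataset include "important data"?

def likely_needs_cac_security_assessment(t: Transfer) -> bool:
    """Rough proxy for Steps 1-4: cross-border transfers by CIIOs, or transfers
    of important data, generally trigger a CAC security assessment."""
    if not t.crosses_border:
        return False
    return t.controller_is_ciio or t.includes_important_data

example = Transfer(crosses_border=True, controller_is_ciio=False, includes_important_data=True)
print(likely_needs_cac_security_assessment(example))  # True: assessment likely required
```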

BCI Technical and Policy Recommendations to Mitigate Privacy Risks


This is the final post of a four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs.

Click here for FPF and IBM’s full report: Privacy and the Connected Mind. In case you missed them, read the first, second, and third blog posts in this series. The first post unpacks BCI technology. The second and third posts analyze BCI applications in healthcare and wellness, commercial, and government, the risks associated with these applications, and the implicated legal regimes. Additionally, FPF-curated resources, including policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are here.

I. Introduction: What are BCIs?

BCIs are computer-based systems that directly record, process, or analyze brain-specific neurodata and translate these data into outputs. Those outputs can be used as visualizations or aggregates for interpretation and reporting purposes and/or as commands to control external interfaces, influence behaviors, or modulate neural activity. BCIs can be broadly divided into three categories: 1) those that record brain activity; 2) those that modulate brain activity; or 3) those that do both, also called bi-directional BCIs (BBCIs). 

BCIs can be invasive or non-invasive and employ a number of techniques for collecting neurodata and modulating neural signals. Neurodata is data generated by the nervous system and consists of the electrical activity between neurons or proxies of that activity. This neurodata may be “personal neurodata” if it is reasonably linkable to an individual.

II. Stakeholders Should Adopt Both Technical and Policy Guardrails to Promote Privacy and Responsible Use of BCIs

From healthcare to smart cities, BCI-facilitated data flows can augment society by improving operations and offering novel insights into long-term problems. However, this nascent technology also creates privacy risks and raises other concerns. As BCIs spread to new realms of activity, existing accountability and enforcement structures may not respond to the challenges raised by these novel BCI applications. Some regulators have already reacted to these perceived inadequacies by creating and reforming policy and legal frameworks. To promote privacy and responsible BCI use, novel technical and policy approaches may also be required to mitigate against potential risks.

A. Technical Recommendations

Providing On/Off and App Controls to Users: Privacy risks arise when a BCI device continuously collects data or is unintentionally switched on. In these situations, users may be unable to exercise control over personal neurodata because they are unaware that collection is occurring in the first place. On/off switches and granular controls on devices and in companion apps can mitigate these privacy risks by enhancing a user’s ability to manage neurodata flows.

End-to-End Encryption of Sensitive Neurodata and Privacy Enhancing Technologies: Developers should explore a variety of measures to promote privacy and protect neurodata during collection and processing. End-to-end encryption can be used to protect sensitive personal neurodata in transit and at rest. Privacy enhancing technologies (PETs) such as differential privacy and de-identification methods—Privacy Preserving Data Publishing (PPDP) for stored and shared data, to name one—can also help BCI developers maximize neurodata’s utility while protecting the identity of the person to whom the neurodata belongs.
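As a purely illustrative example of one such PET (not drawn from the report), the sketch below applies the Laplace mechanism, a standard differential privacy technique, to an aggregate statistic computed from synthetic BCI session data; the dataset, parameter values, and function name are hypothetical.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return an epsilon-differentially-private estimate of true_value by
    adding Laplace noise with scale sensitivity / epsilon."""
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Synthetic example: the share of sessions a wellness BCI labeled "focused",
# released without exposing any single user's record.
session_labels = np.random.binomial(1, 0.6, size=500)   # 1 = focused (synthetic)
true_rate = session_labels.mean()

# If each user contributes one session, adding or removing one record changes
# the rate by at most 1/n, which is the query's sensitivity.
sensitivity = 1 / len(session_labels)
private_rate = laplace_mechanism(true_rate, sensitivity, epsilon=0.5)

print(f"true rate: {true_rate:.3f}, DP estimate: {private_rate:.3f}")
```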

B. Policy Recommendations

Rethinking Transparency and Control: A BCI’s technological capabilities, purposes, and user bases will impact the privacy risks these devices pose, and they may shift with changes in context. These variations will inform the appropriate levels and methods of transparency required to encourage informed consent and provide insights into device capabilities, data flows, data storage, and who controls and has access to the data. 

Developers and regulators should therefore identify measures facilitating a level of transparency that both gives users meaningful control over personal neurodata and reflects a particular BCI application’s privacy risks. While privacy policies and similar documents are often required by law, these policies frequently fail to provide sufficient levels of transparency. Even if the document’s contents are accurate, users may not read them or, if they do, may still find it challenging to understand what is happening with their data. On-device indicators could be marshaled to ameliorate this notice problem; visual or audio indicators may improve transparency and control by informing users when neurodata collection or modulation occurs.

Institutional Review Boards, Ethical Review Boards, and Multi-Stakeholder Engagement: Collecting neurodata and deploying BCI technology may require review and/or approval. BCI providers that are gathering primary research data from human subjects or pre-registering clinical trials may need to complete an institutional review board (IRB) review. Other organizations may need to obtain approval from bodies, such as the Food and Drug Administration (FDA), before selling a BCI product. However, many consumer-facing BCIs are not subject to these requirements. Providers of consumer-facing BCIs that want to have strong privacy protections can still subject these BCIs to ethical review board (ERB) oversight. ERBs can consider questions, including those relating to neurodata collection, use, access—when neurodata is sought for research purposes, but obtaining user consent is impractical, for instance—and storage.

When appropriate, organizations developing BCIs should also facilitate multi-stakeholder engagement during the BCI’s development and deployment lifecycle. The consultations should include those affected by BCIs, not just researchers, policymakers, and initial adopters. Individuals who are impacted by BCIs include people from marginalized communities, such as people with disabilities and historically surveilled populations. BCI developers should actively seek out and incorporate these communities’ feedback into product development and deployment decisions. Developers should also recognize that a product may need to be heavily altered or scrapped to respect community input or avoid harm.

Standards Setting and Other Agreements: Companies, research institutions, and policymakers should set policy and technical standards for BCI research, development, and use that can adapt to changes in the technology, user base, and applications. Some of these standards may be taken from existing policy frameworks, but the unique risks posed by BCIs may require novel approaches, too. As previous blog posts discuss, there is no consensus on the types of neurodata that can or will be interpreted as biometric data under current laws. This affects whether some regulations apply to neurodata, resulting in categories of data such as Brittan Heller’s “biometric psychography” potentially lying outside any law. Policymakers may therefore need to re-evaluate conceptions of biometrics to account for BCI applications. Alongside technical and policy standards, industry and regulators should promote up-to-date training for developers around processes such as data handling and de-identification, drawing on lessons learned in academia.

Open Neurodata Standards and Open Licenses for De-Identified Data: There are large barriers affecting the deployment of BCIs due to the high cost of research and development. Proprietary systems may hinder the exchange of best practices and tools that are needed to fuel a thriving research and development environment. To prevent stagnation, stakeholders should collaborate to develop and adopt open neurodata standards and also consider whether using open licenses for de-identified neurodata research sets is possible and appropriate.

III. Conclusion: Balancing New Data Flows Against BCI Privacy Risks

As BCIs evolve and become more available across numerous sectors, stakeholders must understand the unique risks these technologies present. Key to this understanding is an assessment of how these technologies work and what data is necessary for them to function, as many risks attributed to BCI applications flow from these devices processing certain data.

The adoption of technical and policy recommendations that can make BCI data less identifiable, less potentially harmful, and more secure could minimize privacy and data governance risks. However, the evolution of BCIs will require developers, researchers, and policymakers to differentiate between the risks that exist now and those that may emerge in the future. Only through this careful assessment can stakeholders identify the issues that require immediate attention versus those that need proactive solutions.

BCIs will also likely augment and be combined with many existing technologies currently on the market. This means that new technical and ethical issues are likely to arise and existing issues could compound one another. In the near future, BCI providers, neuroscience and neuroethics experts, policymakers, and societal stakeholders will need to come together to consider what constitutes high-risk use in the field and make informed decisions about whether certain BCI applications should be prohibited, a question around which more robust and critical discussion is needed.

Finally, and perhaps more fundamentally, it is also possible that the future of privacy itself and our notions of what it means to have or obtain privacy at basic human or societal levels could be challenged in ways that we cannot currently comprehend or anticipate. We hope this report and our ongoing work helps support the technical, legal, and policy developments that will be required to ensure the advances in this sector are implemented in ways that benefit society.

How the Kenyan High Court (temporarily) struck down the national digital ID Card: Context and Analysis

The High Court of Kenya, by virtue of a judicial review application, delivered a landmark judgment declaring the proposed national digital ID card (Huduma Card) unconstitutional on October 14, 2021 – a judgment that is now part of the growing data protection and privacy jurisprudence in the country. 

Kenya enacted its first Data Protection Act (KDPA) in 2019, as part of a growing wave of privacy and data protection laws being adopted across African jurisdictions. While discussions of data protection and privacy in Africa are still in their infancy, they are constantly developing. Cape Verde was the first country to enact a data protection law in 2001, while countries such as Zimbabwe enacted their data protection laws as recently as December 2021. This blog analyzes the landmark judgment of the High Court of Kenya in the Huduma Card case, putting it in context with regard to broader privacy and data protection law developments in the country and the continent.

1. Background of the case and brief history

The matter, Republic v Joe Mucheru, Cabinet Secretary Ministry of Information Communication and Technology and others ex parte Katiba Institute and Yash Pal Ghai, concerned the process of launching the “Huduma Card”, Kenya’s proposed first national digital ID card. According to the applicants (Katiba Institute, a constitutional research, policy, and litigation institute in Kenya, and Yash Pal Ghai, a ‘data subject’ as defined by the KDPA), the launch of the Huduma Card violated the KDPA.

Specifically, they argued that the executive order adopted on November 18, 2020 by the country’s Ministry of Interior, the body in charge of rolling out Huduma Cards to registered persons, violated section 31 of the KDPA. Section 31 provides that “where a processing operation is likely to result in high risk to the rights and freedoms of a data subject, by virtue of its nature, scope, context and purposes, a data controller or data processor shall, prior to the processing, carry out a data protection impact assessment”. The KDPA describes processing as “any operation or sets of operations which are performed on personal data or on sets of personal data whether or not by automated means”. It includes activities such as:

On November 24, 2020, the applicants filed for judicial review of the executive order launching the Huduma Card. In the motion, the applicants asked the court to grant three orders:

  1. To prohibit the rolling out of Huduma Cards.
  2. To reverse the decision to roll out Huduma Cards.
  3. To issue an order compelling the respondents to conduct a data protection impact assessment before processing of data and rolling out Huduma Cards. 

The court granted the last two orders.

2. Putting the Huduma Card into Context

For purposes of clarity, it is fundamental to locate the Huduma Card in the larger context within which it exists. The Huduma Card, akin to India’s Aadhaar Card, is the final step in the process of registration in Kenya’s proposed digital identification system, the National Integrated Identity Management System (NIIMS). NIIMS was introduced through the Statute (Miscellaneous Amendments) Act, No. 18 of 2018, which amended Kenya’s civil registration law, the Registration of Persons Act (RPA), in 2018. The amendment introduced a new section, section 9A, which established NIIMS.

On January 18, 2019, the RPA amendment came into force. Pursuant to the introduction of NIIMS, the government began a nationwide exercise of collecting personal data, including biometric data, on March 15, 2019. Soon after, the legal validity of NIIMS and its subsequent implementation were challenged before the High Court. One of the grounds for the challenge was that, in its original state, NIIMS would pose a threat to rights and freedoms protected under the Constitution. Specific to the right to privacy guaranteed under article 31 of the Constitution, issues raised by the different petitioners included the fact that:

On January 30, 2020, the High Court rendered a decision on this petition. It held that:

  1. Implementation of NIIMS would proceed. Processing and use of data collected in NIIMS would proceed on the condition that an appropriate and comprehensive regulatory framework on the implementation of NIIMS, compliant with the applicable constitutional requirements identified in the judgment, is first enacted.
  2. Collection of DNA and GPS coordinates was found to be intrusive and unnecessary, as it violated the right to privacy under the Constitution.

While the above petition was pending determination, the KDPA was enacted and became applicable in November 2019. The court directed that processing of data collected under NIIMS should not happen before the KDPA is operationalized and a regulatory framework put in place. The KDPA is now in operation with the creation of the Office of the Data Protection Commissioner.

In October 2020, the government published two regulations specifically for NIIMS: the Registration of Persons (National Integrated Identity Management System) Rules (2020) and the Data Protection (Civil Registration) Regulations. The former recognizes NIIMS as the primary source of identification in Kenya, while the latter creates a legitimate basis for processing NIIMS data. The Huduma Bill, a comprehensive national digital ID law, was also proposed as another regulatory measure to guide the implementation of NIIMS. Therefore, protection of data collected under NIIMS is presently governed by the Constitution of Kenya, the KDPA, the Registration of Persons Act, the Registration of Persons (National Integrated Identity Management System) Rules (2020), and the Data Protection (Civil Registration) Regulations. It is under these circumstances that the Ministry of Interior, through an executive order, announced the rollout of the Huduma Card, which led to the judicial review before the High Court.

3. Understanding Kenya’s Automated Processing of Personal Data Ecosystem

Before delving into the impact of this recent decision, a brief overview of Kenya’s automated processing of personal data ecosystem is necessary. From a consumer perspective, Kenya’s internet connectivity is growing; as of January 2021, it was reported to be at 40%. This has created a market for internet-supported applications such as digital finance applications and social media applications, among many others. Most of these applications collect personal data in the course of usage or require personal data to operate. Of particular interest is the proliferation of digital finance applications in Kenya, which has created a market for more sophisticated, personal-data-reliant services. A number of these financial service providers rely on alternative scoring models to provide credit, and many of these models rely on highly personal data to determine loan eligibility. Some applications require constant access to location data, while others require access to the microphone.

At the government level, the digitization of information systems is close to the heart of the Kenyan government, as seen in large-scale projects such as NIIMS and the National IT policies. Other government-maintained information systems that contain personal data include the biometric voter registration system, the electronic voter identification system, and health information systems in public health facilities. To conduct these data processing activities, some data controllers rely on third-party data processors. A good example is Kenya’s election management body, which outsources election kits and the systems used to run them.

All these commercial and government systems hold personal data that now fall within the scope of the KDPA. It is for these reasons that a landmark decision on the enforcement of the KDPA bears relevance.

4. Key Issues for Analysis in the Huduma Card Case

The questions of whether a data protection impact assessment (DPIA) must be conducted and of the procedure for handling complaints are key issues in the judgment, with the potential to shape data protection expectations for data subjects as well as for data controllers and processors handling personal data that falls under the KDPA’s scope.

4.1 Conducting a DPIA

In the judicial review application, Katiba Institute (the applicant) submitted that the respondents did not conduct a DPIA, in violation of the KDPA and Order III of the 2020 petition. In rebutting this, the respondents argued that the KDPA was not envisaged to apply to data under NIIMS. The court upheld the applicant’s arguments and ordered that a DPIA be conducted before any further steps to issue Huduma Cards are undertaken. This outcome rested partly on the fact that some of the parties in the 2020 petition, who were also respondents in this matter, had submitted to the court that legal safeguards were underway to ensure protection of data under NIIMS. The legal safeguard, in this case, was the Data Protection Bill, now the KDPA. The court, therefore, did not see why the Bill, which is now law, should not apply in the present matter.

The question of whether or not to conduct a DPIA remains a subjective one, at least for civil registration entities. The Data Protection (Civil Registration) Regulations, adopted in the implementation of the KDPA but relating only to public bodies with a civil registration function, do not state whether it is mandatory to conduct a DPIA for information systems held by civil registration entities such as NIIMS and its components. Regulation 19 provides that a data protection impact assessment may be conducted on condition that it is required in accordance with section 31 of the KDPA. On the other hand, Section 31(6) of the KDPA provides that: “The Data Commissioner shall set out guidelines for carrying out an impact assessment under this section”. While the Data Commissioner did develop the Data Protection (General) Regulations, 2021, which attempt to set the criteria for conducting a DPIA by delineating processing activities that would amount to “high risk”, those Regulations do not apply to civil registration entities, under which NIIMS falls. Regulation 3 of the General Regulations provides that: “These Regulations shall not apply to civil registration entities specified under the Data Protection (Civil Registration) Regulations, 2020”.

Interestingly, the High Court did not make any findings with regard to what specifically constitutes “high risk” processing of personal data related to the Huduma Card in this judgment. However, when adjudicating on the initial 2020 petition, the Court implied an overall high risk of the entire NIIMS system. For instance, in prohibiting collection of GPS coordinates and DNA data, the Court stated that collection of such data would be intrusive and carries with it the risk of privacy violations and surveillance.

However, there have been attempts elsewhere to make the DPIA-triggering criteria objective by denoting situations that amount to “high risk”. Kenya’s KDPA adheres to the “high risk” criterion, similar to the EU’s General Data Protection Regulation (GDPR).

Beyond the scope of NIIMS, the fact that the data protection regulations have not yet come into force to clarify which situations necessitate a DPIA, and how to carry one out, could negatively affect other data controllers and processors and, consequently, data subjects. The Regulations are currently before the Delegated Legislation Committee in Parliament awaiting comments and will be deemed approved 28 days after their publication date. Nevertheless, the fact that the conditions triggering the obligation to carry out a DPIA have not yet come into force does not diminish the data controllers’ general obligation to implement measures to appropriately manage risks to the rights and freedoms of data subjects. Even without the explicit requirement to conduct a DPIA, controllers must continuously assess the risks created by their processing so as to identify when a processing operation is likely to result in a high risk to the rights and freedoms of data subjects.

In light of the present judgment, it will be interesting to see whether data collected and held in information systems created after the Constitution was adopted in 2010 but before the KDPA came into force will be subjected to DPIAs, if they meet the criteria for conducting one. If the court’s rationale in the Huduma Card case is anything to go by, it is likely that the KDPA could apply retroactively to such information systems. The court in its analysis stated: “it is clear that the Act was intended to be retrospective to such an extent or to such a time as to cover any action taken by the state or any other entity or person that may be deemed to affect, in one way or the other, the right to privacy under Article 31 (c) and (d) of the Constitution”.

4.2 Dispute Resolution in Data Protection Cases

In addition to the issue of conducting a DPIA, which formed the main argument, the court also deliberated on the handling of complaints under the KDPA, an issue that could have persuasive impact on future data protection cases. In deciding whether to give an audience to the applicants, the court dealt with the issue of whether it had jurisdiction to hear the matter. The question of jurisdiction, as presented by the interested party (the Data Protection Commissioner), arose from the fact that one of the applicants, Yash Pal Ghai, described in the matter as an affected data subject, claimed that rolling out Huduma Cards without a DPIA would prejudice his rights as a data subject under the KDPA.

Owing to objections raised by the interested party, the Data Protection Commissioner (DPC), the court found that the applicant could not, in the given circumstances, approach the court directly. To obtain redress, the data subject was required to first exhaust all other available dispute resolution means as stipulated in the KDPA and the Data Protection (Civil Registration) Regulations before seeking court intervention. The Data Protection (Civil Registration) Regulations provides an internal complaint handling procedure. Regulation 23(1) provides that an aggrieved data subject may lodge a complaint with the civil registration entity. 

Further, Regulation 23(6) provides that a data subject has a right to appeal to the Data Commissioner if the data subject is dissatisfied with the decision of the civil registration entity. Section 56(1) of the KDPA provides that “A data subject who is aggrieved by a decision of any person under this Act may lodge a complaint with the Data Commissioner in accordance with this Act”. If the data subject wanted to opt out of the dispute resolution mechanisms under the KDPA and the Data Protection (Civil Registration) Regulations, they had to make an application to the court explaining why such mechanisms would not be efficient. The court upheld this objection. Katiba Institute, however, was allowed to bypass the dispute resolution mechanisms provided under the KDPA and the Data Protection Regulations for two reasons:

  1. They do not fall under the category of a data subject. The KDPA describes a data subject as an identified or identifiable natural person who is the subject of personal data.
  2. Their application was based on grounds of public interest. Article 22(2)(c) of the Constitution permits instituting court proceedings by a person acting in the public interest.

The court thus found that Katiba Institute had sufficient interest in decisions made by any person under the KDPA despite not being a data subject.

Effective handling of complaints related to data protection is crucial for consumers and businesses. As personal data processing activities in Kenya are now subject to the KDPA unless they fall under exemptions (and even then, minimum processing requirements apply), it is important that the institutions involved are clear on their respective obligations. This decision is a good starting point on who may bring a data protection dispute to court and when. With respect to maintaining institutional autonomy, this is a significant move, as it indicates the court’s intention not to interfere with nascent administrative bodies with quasi-judicial functions.

While the Office of the Data Protection Commissioner (DPC) and the civil registration entities are being granted independence to oversee enforcement of the KDPA and related regulations, it will be important to further delineate how far such bodies can go with regard to dispute resolution. When can an aggrieved data subject or data controller bypass the DPC and approach the court where their rights and freedoms under the KDPA are violated or their obligations are under threat? This begs the question: how will the DPC interact with the courts? To explore this, it is crucial to first highlight the role of the court in data protection dispute resolution as per the KDPA:

  1. Issuing a search warrant to enter a premise for the purpose of discharging any function (including dispute resolution) or power under the KDPA.[1]
  2. Hearing appeals against administrative actions such as enforcement and penalty notices taken by the DPC.[2]
  3. Issuing preservation orders to preserve personal data that is vulnerable to loss or modification.[3] This is useful during investigations.

Thus far, the role of the court appears to be secondary in first-instance dispute resolution, with the DPC having priority to determine the existence of an infringement. This can be justified under the Fair Administration Act (FAA), the legislation that deals with administrative action.[4] On the other hand, the Constitution provides citizens with the right to approach courts where their rights and freedoms are violated.[5] This includes the Constitutionally protected right to privacy from which the KDPA emanates. As for judicial and quasi-judicial decisions, the Constitution provides that “the High Court has supervisory jurisdiction over the subordinate courts and over any person, body or authority exercising a judicial or quasi-judicial function, but not over a superior court”. Based on these findings, the court appears to recognize the importance of a data protection authority. However, it will have to balance this against the Constitutionally protected right to institute court proceedings by anyone whose rights and freedoms are affected.

Conclusion

Pursuant to the High Court order that a DPIA be conducted, the relevant ministry complied and conducted the assessment, pointing to an acknowledgment of the importance of accountability with regard to sensitive citizen data. The assessment is not yet public. As the DPIA was the sole requirement to proceed with issuing the Huduma Card, it is expected that the rollout will continue, unless further challenges are successfully made.

Case law is key in providing guidance on interpreting statutes. It is for this reason that this latest judgment is of great significance to the future of government-led digital ID initiatives such as Huduma Namba, as well as to data subjects and businesses, as it could shape how the implementation of the KDPA proceeds in the future. Given that a key focus in data protection now is the initial implementation of the KDPA, clarity on issues such as when a DPIA is required and the appropriate forum for dispute resolution will be crucial in ensuring that data processing activities are performed in compliance with the law.


[1] Section 60, Data Protection Act (2019)

[2] Section 64, Data Protection Act (2019)

[3] Section 66, Data Protection Act (2019)

[4] Section 9(2), (3), Fair Administration Act (2015)

[5] Article 22, Constitution of Kenya (2010)

BCI Commercial and Government Use: Gaming, Education, Employment, and More


This post is the third in a four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs.

Click here for FPF and IBM’s full report: Privacy and the Connected Mind. In case you missed them, read the first and second blog posts in this series. The first post unpacks BCI technology, while the second analyzes BCI applications in healthcare and wellness, the risks associated with these applications, and the implicated legal regimes. Additionally, FPF-curated resources, including policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are here.

I. Introduction: What are BCIs?

BCIs are computer-based systems that directly record, process, or analyze brain-specific neurodata and translate these data into outputs. Those outputs can be used as visualizations or aggregates for interpretation and reporting purposes and/or as commands to control external interfaces, influence behaviors or modulate neural activity. BCIs can be broadly divided into three categories: 1) those that record brain activity; 2) those that modulate brain activity; or 3) those that do both, also called bi-directional BCIs (BBCIs). 

BCIs can be invasive or non-invasive and employ a number of techniques for collecting neurodata and modulating neural signals. Neurodata is data generated by the nervous system and consists of the electrical activity between neurons or proxies of that activity. This neurodata may be “personal neurodata” if it is reasonably linkable to an individual.

II. BCIs are Entering into the Commercial and Enterprise Market in the Fields of Gaming, Employment, Education, and other Future-Facing Areas.

Gaming: BCIs could augment existing gaming platforms and offer players new ways to play using devices that record and interpret their neural signals. Current examples of BCI gaming combine neurotechnology with existing gaming devices or platforms. These devices attempt to record the user’s electrical impulses, collecting and interpreting the player’s brain signals during play. While most gaming BCIs are single-player, researchers are exploring whether BCIs can provide multiplayer experiences using multi-person non-invasive brain-to-brain interfaces (BBIs). One example of a multiplayer BCI is BrainNet, where three participants exchange neural signals to play a Tetris-like game. BCIs can also be applied to augment games on extended reality (XR) devices.

Today’s BCI games are not fully immersive experiences. Players can use neurotechnology to perform only discrete actions. Future BCI games may offer greater immersion by combining neurodata with other biometric and psychological information, which could allow players to control in-game actions using their conscious thoughts.

Employment: BCIs can monitor worker engagement to improve safety, alert workers or supervisors of dangerous situations, and help make operational or employment decisions. Life and AttentivU are examples of BCIs that track and promote worker attentiveness during tasks. These BCIs can also provide notifications when an employee exhibits fatigue or drowsiness. Other employment BCIs measure neurodata to determine a worker’s emotional state. Management could choose to use this neurodata to gauge efficiency, manage workloads, determine worker happiness levels, or make hiring, firing, or promotion decisions. 

Employment BCIs can also be used to modulate workers’ brain activity for purposes of improving performance. Transcranial direct current stimulation (tDCS) could be used to promote multitasking with this goal in mind. Invasive BCIs, such as Elon Musk’s Neuralink, are also being evaluated for their potential to increase efficiency during high-pressure and time-sensitive tasks.

Education: BCI technology could be implemented in learning environments to gather student neurodata. This neurodata could reveal whether a student is finding an assignment challenging, creating opportunities to adjust the amount and level of work or to help teachers and parents assess and improve classroom engagement.

Future-Facing Fields: Smart Cities, Connected Vehicles, and Neuromarketing: BCIs could be applied to augment activities in other contexts. Researchers are exploring the possibility of integrating BCIs into smart cities and communities to enhance public safety, city and transportation efficiency, and energy monitoring. BCIs could also provide new methods for controlling connected vehicles and determining driver attention.

Researchers have used neurotechnology to record physiological and neural signals with varying degrees of accuracy. Recorded neurodata can reveal a consumer’s mood, motivations, and preferences when they buy and use a product or service. Product makers and advertisers can utilize this data to better understand consumer choices.

III. Privacy and Other Risks Associated With BCIs in Gaming, Employment, Education, and Future-Facing Fields: From Profiling to Neurodata-based Decision Making.

BCI applications in these spaces present common and area-specific risks and considerations. 

Powered-up Profiling: Gaming and neuromarketing BCIs involve neurodata collection, including user reactions to content in a virtual world. AI and machine learning models can be trained on this neurodata, in combination with other biological changes in response to content, to associate user-specific changes in neural signals with certain physiological states. Neurodata could therefore facilitate the creation of granular profiles of individuals. Since neurodata can capture an individual’s reactions to sensitive content, these profiles may offer intimate portraits of the user’s health, sexual preferences, and even vices. 
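
For illustration only, the sketch below (not taken from the report) shows the basic pattern such profiling could follow: train a simple classifier on per-session features derived from neural and other biometric signals, then use it to infer a user-specific state. The data, feature names, labels, and model choice are all synthetic assumptions.

```python
# Illustrative-only sketch of how neural features could feed a profiling model.
# All data here is synthetic; real systems would use recorded neurodata and far
# richer features. Feature names, labels, and model choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic "sessions": each row = [alpha_power, beta_power, heart_rate_delta]
n_sessions = 500
X = rng.normal(size=(n_sessions, 3))

# Synthetic label standing in for an inferred physiological/affective state
# (e.g., "engaged" vs. "not engaged"); here it is just a noisy function of X.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n_sessions) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Once such a model exists, per-user predictions accumulated over time amount
# to a profile of inferred states -- the privacy concern discussed above.
```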

Organizations could use these profiles to make inferences and decisions. Recognizing this neurodata’s value, organizations collecting and retaining neurodata across sectors may also be incentivized to share it with, or sell it to, advertisers. Advertisers could use this information to create more targeted behavioral ads, which could encourage unhealthy habits. 

Lack of Transparency and Control Over Disclosure: Unlike some other sources of personal information, neurodata stems from electrical impulses that users cannot control. Whether participating in BCI games or acting online more generally, users are therefore often unaware of neurodata tracking. This means users have less control over personal neurodata flows, which increases the likelihood that this data will be used for purposes unrelated to those for which it was collected. Even when a person has control over neurodata monitoring, for example through an opt-in consent requirement, the individual may feel compelled to share neurodata with someone (e.g., an employer) to avoid retaliation or disparate treatment.

Neurodata-Based Decision Making and BCI Accuracy: The volume and sensitive nature of some neurodata generated in entertainment, employment, education, and neuromarketing mean that it could inform important decisions. These decisions could affect a person’s life, from the content a user receives in virtual game worlds to whether an employee is promoted or discharged. Concerns about neurodata informing decisions are exacerbated when BCIs collect inaccurate data. Decisions informed by inaccurate neurodata may contribute to diverse harms, including the perpetuation of feedback loops that fuel societal division. 

Chilling Speech and Creating Distrust in Institutions: BCI-enabled monitoring may chill speech and reduce trust in institutions among employees, students, and the general public. Employees who know that they are constantly monitored may place less trust in their employer, lose morale, or refrain from certain behavior. Monitoring may cause students, especially those from communities that have been historically targeted by surveillance or suffer from learning differences, to refrain from certain speech and thoughts in order to avoid retaliation or stigmatization. BCIs incorporated into smart city infrastructure could generate new sources of personal data and enable more invasive surveillance. 

IV. Regulations that Might Cover BCIs and Neurodata Include Comprehensive Privacy Laws, Sectoral Privacy Laws, and Self-Regulatory Frameworks.

Comprehensive Privacy Laws and Agency Authority: Both US and foreign comprehensive privacy laws may regulate BCI use and the processing of neurodata. The EU’s General Data Protection Regulation (GDPR) and the California Privacy Rights Act (CPRA) define biometric information broadly, meaning that neurodata may fall within these laws’ scope. However, both laws are framed in terms of whether the data is, or could be, used to single out an individual. Concepts such as Brittan Heller’s “biometric psychography” (information from the body used to determine interests, not identity) may not be interpreted as covered, because this information neither is nor could be used to facilitate identification.

If triggered, the GDPR and CPRA impose obligations on regulated organizations and grant rights to data subjects. Neurodata processing may implicate special rules under these laws. For example, an organization using personal neurodata in marketing would trigger the CPRA’s opt-out right for “cross-context behavioral advertising.” While US law generally gives companies significant discretion when writing privacy policies affecting at-will employees, the GDPR indicates that a worker’s consent cannot serve as a lawful basis for processing the employee’s personal data. US administrative agencies may also have powers enabling them to police certain BCI applications. The Federal Trade Commission (FTC), for example, has authority to investigate and enforce penalties against organizations for unfair and deceptive practices, such as those related to advertising.

Sectoral Privacy Laws: The Children’s Online Privacy Protection Act (COPPA) may apply to game operators if they collect, use, or disclose “personal information,” and either target games toward children under 13 or have actual knowledge that such children are using the game. Whether gaming BCIs are regulated under COPPA depends in part on the meaning of “personal information.” Neurodata collected by gaming BCIs could be “personal information” under COPPA if it is considered a “persistent identifier” (a category of personal information) or if the FTC amends the definition of “personal information” to cover biometric data. COPPA gives parents and guardians rights over their children’s personal information, including access and deletion rights. The statute also imposes obligations on operators, such as obtaining verifiable parental consent before collecting information from a child. 

Biometric-specific state laws in the US, such as Illinois’ Biometric Information Privacy Act (BIPA), may impact neurodata processing across sectors. Whether these laws apply, however, depends on the meaning of “biometric identifiers.” Under BIPA, this term is important, as it affects what “biometric information” can be based on. While other state biometric laws, such as Washington’s statute, contain broad definitions, BIPA defines “biometric identifier” narrowly to include “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.” Neurodata-based information used as an identifier will therefore more likely fall outside of BIPA’s scope, since it is not considered a “biometric identifier.” 

BCIs used to monitor workers may implicate employment law. The Electronic Communications Privacy Act (ECPA) limits some types of employee monitoring. However, ECPA permits employers to monitor workplace communications, especially when those conversations take place on company devices like company-owned computers and telephones. Anti-discrimination laws, like the Americans with Disabilities Act (ADA), may stop employers from using BCI results in hiring and firing decisions if the results reflect a disability. 

Federal, state, and local student data laws may grant rights to students and parents while imposing requirements on schools and neurotech companies with respect to the processing of personal neurodata. BCI use may be impacted by the Family Educational Rights and Privacy Act (FERPA), which protects education records, including biometric education records, at schools that receive federal funding. A student’s personal neurodata could be part of this record and would therefore receive FERPA protections. These protections include rights for parents and for students aged 18 and older, as well as obligations on schools. All 50 states and Washington, DC have introduced student privacy legislation, and some could impact BCI use in schools. District and school-level rules may also affect neurodata collection and processing. 

Self-regulatory initiatives: Beyond laws and agency enforcement, voluntary self-regulation also impacts the use of BCIs. Neuromarketing is an example: the Neuromarketing Science & Business Association’s (NMSBA’s) Code of Ethics identifies several commitments, including consent and transparency, that organizations should follow when using BCIs for neuromarketing purposes.

V. Conclusion

Commercial and government BCIs could deliver dividends ranging from novel gaming experiences to more efficient workforces. However, such applications also create privacy risks. While the law could affect how these technologies are used, the scope of existing rules means that certain applications of BCIs are not addressed by current regulatory structures.

Read the next blog post in the series: Technical and Policy Recommendations to Mitigate Privacy Risks