BCI Technical and Policy Recommendations to Mitigate Privacy Risks
This is the final post of a four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs.
Click here for FPF and IBM’s full report: Privacy and the Connected Mind. In case you missed them, read the first, second, and third blog posts in this series. The first post unpacks BCI technology. The second and third posts analyze BCI applications in healthcare and wellness, commercial, and government, the risks associated with these applications, and the implicated legal regimes. Additionally, FPF-curated resources, including policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are here.
I. Introduction: What are BCIs?
BCIs are computer-based systems that directly record, process, or analyze brain-specific neurodata and translate these data into outputs. Those outputs can be used as visualizations or aggregates for interpretation and reporting purposes and/or as commands to control external interfaces, influence behaviors, or modulate neural activity. BCIs can be broadly divided into three categories: 1) those that record brain activity; 2) those that modulate brain activity; or 3) those that do both, also called bi-directional BCIs (BBCIs).
BCIs can be invasive or non-invasive and employ a number of techniques for collecting neurodata and modulating neural signals. Neurodata is data generated by the nervous system: the electrical activity between neurons, or proxies of that activity. Neurodata may be “personal neurodata” if it is reasonably linkable to an individual.
II. Stakeholders Should Adopt Both Technical and Policy Guardrails to Promote Privacy and Responsible Use of BCIs
From healthcare to smart cities, BCI-facilitated data flows can benefit society by improving operations and offering novel insights into long-standing problems. However, this nascent technology also creates privacy risks and raises other concerns. As BCIs spread to new realms of activity, existing accountability and enforcement structures may not respond to the challenges raised by these novel BCI applications. Some regulators have already reacted to these perceived inadequacies by creating and reforming policy and legal frameworks. To promote privacy and responsible BCI use, novel technical and policy approaches may also be required to mitigate potential risks.
A. Technical Recommendations
Providing On/Off and App Controls to Users: Privacy risks arise when a BCI device continuously collects data or is unintentionally switched on. In both cases, users may be unable to exercise control over personal neurodata because they are unaware that collection is occurring in the first place. On/off switches and granular controls on devices and in companion apps can mitigate these privacy risks by enhancing a user’s ability to manage neurodata flows.
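As a rough sketch of what off-by-default, per-stream controls could look like in a companion app's data model: the `NeurodataControls` class and the stream names below are hypothetical illustrations, not any vendor's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class NeurodataControls:
    """Hypothetical consent state a BCI companion app might maintain.

    Every stream is off by default; collection requires both a global
    on/off switch and an explicit per-stream opt-in.
    """
    device_on: bool = False
    stream_consent: dict = field(default_factory=lambda: {
        "raw_eeg": False,          # raw signal collection, off by default
        "attention_score": False,  # derived metric, off by default
        "sleep_summary": False,    # aggregate report, off by default
    })

    def may_collect(self, stream: str) -> bool:
        # Collection is allowed only if the device is on AND the user
        # has explicitly enabled this specific stream.
        return self.device_on and self.stream_consent.get(stream, False)

controls = NeurodataControls()
assert not controls.may_collect("raw_eeg")  # nothing flows by default

controls.device_on = True
controls.stream_consent["sleep_summary"] = True
assert controls.may_collect("sleep_summary")
assert not controls.may_collect("raw_eeg")  # granularity: other streams stay off
```

The design choice worth noting is that an unintentionally powered-on device still collects nothing, because each stream requires its own affirmative opt-in.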
End-to-End Encryption of Sensitive Neurodata and Privacy Enhancing Technologies: Developers should explore a variety of measures to promote privacy and protect neurodata during collection and processing. End-to-end encryption can protect sensitive personal neurodata in transit and at rest. Privacy enhancing technologies (PETs), such as differential privacy, and de-identification methods, such as Privacy Preserving Data Publishing (PPDP) for stored and shared data, can also help BCI developers maximize neurodata’s utility while protecting the identity of the person to whom the neurodata belongs.
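To illustrate the differential-privacy approach mentioned above, the sketch below applies the Laplace mechanism, a standard differential-privacy technique, to an aggregate statistic before release. The per-user "focus scores," bounds, and epsilon value are all hypothetical, chosen only to make the example concrete.

```python
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale): an exponential magnitude with a random sign."""
    magnitude = random.expovariate(1.0 / scale)
    return magnitude if random.random() < 0.5 else -magnitude

def private_mean(values, lower, upper, epsilon):
    """Release the mean of bounded per-user scores with epsilon-differential privacy.

    Clamping each value to [lower, upper] bounds any one user's influence,
    so the mean's sensitivity is (upper - lower) / n; Laplace noise scaled
    to sensitivity / epsilon then hides any individual's contribution.
    """
    n = len(values)
    clamped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    return sum(clamped) / n + laplace_noise(sensitivity / epsilon)

# Hypothetical per-user "focus scores" derived from EEG sessions.
scores = [0.62, 0.71, 0.55, 0.80, 0.67, 0.59, 0.73, 0.66]
noisy_mean = private_mean(scores, lower=0.0, upper=1.0, epsilon=1.0)
```

The released `noisy_mean` is close to the true average, but the added noise means no single user's score can be confidently inferred from the output, trading a small loss of accuracy for a quantifiable privacy guarantee.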
B. Policy Recommendations
Rethinking Transparency and Control: A BCI’s technological capabilities, purposes, and user bases will impact the privacy risks these devices pose, and they may shift with changes in context. These variations will inform the appropriate levels and methods of transparency required to encourage informed consent and provide insights into device capabilities, data flows, data storage, and who controls and has access to the data.
Developers and regulators should therefore identify measures facilitating a level of transparency that both gives users meaningful control over personal neurodata and reflects a particular BCI application’s privacy risks. While privacy policies and similar documents are often required by law, these policies frequently fail to provide sufficient levels of transparency. Even if the document’s contents are accurate, users may not read them or, if they do, may still find it challenging to understand what is happening with their data. On-device indicators could be marshaled to ameliorate this notice problem; visual or audio indicators may improve transparency and control by informing users when neurodata collection or modulation occurs.
Institutional Review Boards, Ethical Review Boards, and Multi-Stakeholder Engagement: Collecting neurodata and deploying BCI technology may require review and/or approval. BCI providers that are gathering primary research data from human subjects or pre-registering clinical trials may need to complete an institutional review board (IRB) review. Other organizations may need to obtain approval from bodies such as the Food and Drug Administration (FDA) before selling a BCI product. However, many consumer-facing BCIs are not subject to these requirements. Providers of consumer-facing BCIs that want strong privacy protections can still subject these BCIs to ethical review board (ERB) oversight. ERBs can weigh questions about neurodata collection, use, storage, and access, for instance when neurodata is sought for research purposes but obtaining user consent is impractical.
When appropriate, organizations developing BCIs should also facilitate multi-stakeholder engagement during the BCI’s development and deployment lifecycle. The consultations should include those affected by BCIs, not just researchers, policymakers, and initial adopters. Individuals impacted by BCIs include people from marginalized communities, such as people with disabilities and historically surveilled populations. BCI developers should actively seek out and incorporate these communities’ feedback into product development and deployment decisions. Developers should also recognize that a product may need to be heavily altered or scrapped to respect community input or avoid harm.
Standards Setting and Other Agreements: Companies, research institutions, and policymakers should set policy and technical standards for BCI research, development, and use that can adapt to changes in the technology, user base, and applications. Some of these standards may be drawn from existing policy frameworks, but the unique risks posed by BCIs may require novel approaches, too. As previous blog posts discuss, there is no consensus on the types of neurodata that can or will be interpreted as biometric data under current laws. This affects whether some regulations apply to neurodata, leaving categories of data such as Brittan Heller’s “biometric psychography” potentially outside any law. Policymakers may therefore need to re-evaluate conceptions of biometrics to account for BCI applications. Alongside technical and policy standards, industry and regulators should promote up-to-date training for developers in processes such as data handling and de-identification, drawing on lessons from academia.
Open Neurodata Standards and Open Licenses for De-Identified Data: There are large barriers affecting the deployment of BCIs due to the high cost of research and development. Proprietary systems may hinder the exchange of best practices and tools that are needed to fuel a thriving research and development environment. To prevent stagnation, stakeholders should collaborate to develop and adopt open neurodata standards and also consider whether using open licenses for de-identified neurodata research sets is possible and appropriate.
III. Conclusion: Balancing New Data Flows Against BCI Privacy Risks
As BCIs evolve and become more available across numerous sectors, stakeholders must understand the unique risks these technologies present. Key to this understanding is an assessment of how these technologies work and what data is necessary for them to function, as many risks attributed to BCI applications flow from these devices processing certain data.
The adoption of technical and policy recommendations that can make BCI data less identifiable, less potentially harmful, and more secure could minimize privacy and data governance risks. However, the evolution of BCIs will require developers, researchers, and policymakers to differentiate between the risks that exist now and those that may emerge in the future. Only through this careful assessment can stakeholders identify the issues that require immediate attention versus those that need proactive solutions.
BCIs will also likely augment and be combined with many existing technologies that are currently on the market. This means that new technical and ethical issues are likely to arise, and existing issues could compound one another. In the near future, BCI providers, neuroscience and neuroethics experts, policymakers, and societal stakeholders will need to come together to consider what constitutes high-risk use in the field and make informed decisions about whether certain BCI applications should be prohibited, a question that demands more robust and critical discussion.
Finally, and perhaps more fundamentally, it is also possible that the future of privacy itself, and our notions of what it means to have or obtain privacy at basic human or societal levels, could be challenged in ways that we cannot currently comprehend or anticipate. We hope this report and our ongoing work help support the technical, legal, and policy developments that will be required to ensure the advances in this sector are implemented in ways that benefit society.