Brain-Computer Interfaces: Privacy and Ethical Considerations for the Connected Mind
A forthcoming FPF and IBM report on BCI privacy and ethics will be published in November 2021.
FPF-curated educational resources, policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are available here.
Introduction
Brain-computer interfaces (BCIs) are a prime example of an emerging technology that is spawning new avenues of human-machine interaction. Communication interfaces have developed from the keyboard and mouse to touchscreens, voice commands, and gesture interactions. As computers become more integrated into the human experience, new ways of commanding computer systems and experiencing digital realities have grown in popularity, with novel uses ranging from gaming to education.
Defining BCIs and Neurodata
BCIs are computer-based systems that directly record, process, analyze, or modulate human brain activity in the form of neurodata that is then translated into an output command from human to machine. Neurodata is data generated by the nervous system, composed of the electrical activities between neurons or proxies of this activity. When neurodata is linked, or reasonably linkable, to an individual, it is personal neurodata.
BCI devices can be either invasive or non-invasive. Invasive BCIs are installed directly into—or on top of—the wearer’s brain through a surgical procedure. Today, invasive BCIs are mainly used in the health context. Non-invasive BCIs rely on external electrodes and other sensors or equipment connected to the external surface of the head or body to collect and modulate neural signals. Consumer-facing BCIs primarily use various non-invasive methods, including headbands.
Key Applications and Top-of-Mind Privacy and Ethical Challenges
Some BCI implementations raise few, if any, privacy issues. For example, individuals using BCIs to control computer cursors might not reveal any more personal information than typical mouse users, provided BCI systems promptly discard cursor data. However, some uses of BCI technologies raise important questions about how laws, policies, and technical controls can safeguard inferences about individuals’ brain functions, intents, or emotional states. These questions are increasingly salient in light of the expanded use of BCIs in:
- Health and Wellness – where BCIs monitor fatigue, diagnose medical conditions, stimulate or modulate brain activity, and control prosthetics and devices like wheelchairs.
- Gaming – where BCIs augment existing gaming platforms and offer players new ways to play using devices that record and interpret their neural signals.
- Employment – where BCIs monitor workers’ engagement to improve safety during high-risk tasks, alert workers or supervisors of dangerous situations, modulate workers’ brain activity to improve performance, and provide tools to more efficiently complete tasks.
- Education – where BCIs can track student attention, identify students’ unique needs, and alert teachers and parents of student learning progress.
- Smart Cities – where BCIs could provide new avenues of communication for construction teams and safety workers and enable potential new methods for connected vehicle control.
- Neuromarketing – where marketers use BCIs to intuit consumers’ moods and to gauge product and service interest.
- Military – where governments are researching the potential of BCIs to help rehabilitate soldiers’ injuries and enhance communication.
It is important for stakeholders in this space to distinguish current and near-future uses from the far-distant notions depicted by science fiction creators. A realistic view of capabilities is necessary to credibly identify urgent concerns and prioritize meaningful policy initiatives. While the potential uses of BCIs are numerous, BCIs cannot at present or in the near future “read a person’s complete thoughts,” serve as an accurate lie detector, or pump information directly into the brain.
As BCIs evolve and are more commercially available across numerous sectors, it is paramount to understand the real risks such technologies pose. BCIs raise many of the same risks posed by home assistants, medical devices, and wearables, but implicate new and heightened risks associated with privacy of thought, resulting from recording, using, and sharing a variety of neural signals. Risks include, but are not limited to:
- Collecting, and potentially sharing, sensitive information related to individuals’ private emotions, psychology, or intent;
- Combining neurodata with other personal information to build increasingly granular and sensitive profiles about users for invasive or exploitative uses, including behavioral advertising;
- Making decisions that significantly impact patients, employees, or students based on information drawn from neurodata (with distinct risks whether the conclusions drawn are accurate or inaccurate);
- Security breaches compromising patient health and individual safety and privacy;
- A lack of meaningful transparency and personal control over individuals’ neurodata; and
- Surveilling individuals based on the collection of sensitive neurodata, especially from historically and heavily surveilled communities.
These technologies also raise important ethical questions around fairness, justice, human rights, autonomy, and personal dignity.
A Mix of Technical and Policy Solutions Is Best for Maximizing Benefits While Mitigating Risks
To promote privacy-protective and ethical uses of BCIs, stakeholders should adopt technical measures including but not limited to:
- Providing hard on/off controls whenever possible;
- Providing granular user controls on devices and in companion apps for managing the collection, use, and sharing of personal neurodata;
- Operationalizing best practices for security and privacy when storing, sharing, and processing neurodata including:
- Employing appropriate privacy enhancing technologies;
- Encrypting sensitive personal neurodata in transit and at rest; and
- Embracing appropriate security measures to combat bad actors.
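To make the granular user controls above concrete, the sketch below models a default-deny consent ledger that a BCI companion app could consult before collecting, using, or sharing a given type of neural signal. The signal names, purposes, and class design are illustrative assumptions for this sketch, not a standard or any vendor’s actual API.

```python
from dataclasses import dataclass, field

# Illustrative purposes a BCI companion app might gate individually.
PURPOSES = ("collection", "use", "sharing")

@dataclass
class NeurodataConsent:
    """Per-signal-type, per-purpose consent ledger for one user (illustrative only)."""
    grants: dict = field(default_factory=dict)  # (signal_type, purpose) -> bool

    def grant(self, signal_type: str, purpose: str) -> None:
        if purpose not in PURPOSES:
            raise ValueError(f"unknown purpose: {purpose}")
        self.grants[(signal_type, purpose)] = True

    def revoke(self, signal_type: str, purpose: str) -> None:
        self.grants[(signal_type, purpose)] = False

    def allowed(self, signal_type: str, purpose: str) -> bool:
        # Default-deny: anything not explicitly granted is refused.
        return self.grants.get((signal_type, purpose), False)

# Usage: a user permits collection and on-device use of a hypothetical
# EEG attention signal, but never grants sharing.
consent = NeurodataConsent()
consent.grant("eeg_attention", "collection")
consent.grant("eeg_attention", "use")
print(consent.allowed("eeg_attention", "use"))      # True
print(consent.allowed("eeg_attention", "sharing"))  # False
```

The default-deny check is the key design choice: a purpose the user has not affirmatively granted, or a signal type the app has never asked about, is treated the same as an explicit refusal.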
Stakeholders should also adopt policy safeguards including but not limited to:
- Rethinking transparency, notice, terms of use, and consent frameworks to empower users with a baseline of BCI literacy around the collection, use, sharing, and retention of their neurodata;
- Engaging IRBs, corporate review boards, ethical oversight, and other independent review mechanisms to identify and mitigate risks;
- Facilitating participatory and inclusive community input prior to and during BCI development and rollout;
- Creating dynamic technical, policy, and employee training standards to account for the gaps in current regulation; and
- Promoting an open and inclusive research ecosystem by encouraging the adoption, where possible, of open standards for the collection and analysis of neurodata and the sharing of research data under open licenses and with appropriate safeguards in place.
Conclusion
Because the neurotechnology space is especially future-facing, developers, researchers, and policymakers will have to create best practices and policies that address existing concerns and strategically prioritize future risks, balancing the need for proactive solutions with the need to counter misinformation and hype. BCIs will likely augment and complicate many technologies currently on the market, and privacy professionals will have to stay abreast of developments in this quickly growing space.