Key FPF-curated background resources regarding brain-computer interfaces – policy and regulatory documents, academic papers, and technical analyses – are available here.
Recently, Elon Musk livestreamed an update for Neuralink, his startup focused on creating brain-computer interfaces (BCIs). “BCI” is an umbrella term for devices that detect, amplify, and translate brain activity into usable neuroinformation. At the event, Musk unveiled his newest BCI, implanted in the brain of a pig.
Musk predicts that future BCIs will not only “read” individuals’ brainwaves, but also “write” information into the user’s brain to accomplish such goals as identifying medical issues and allowing users with limited movement to type using their thoughts. Musk even mentioned the long-term prospect of downloading memories. Explaining Neuralink’s newest device, Musk said, “In a lot of ways, it’s kind of like a Fitbit in your skull, with tiny wires.”
It can be hard to separate the facts from the hype surrounding cutting-edge product announcements. But it is clear that brain-computer interfaces are increasingly used by hospitals, schools, individuals, and others for a range of purposes. It is equally clear that BCIs often use sensitive personal data and create data protection risks.
Below, we explain how BCI technologies work, how BCIs are used today, some of the technical challenges associated with implementing the technologies, and the data protection risks they can create. An upcoming FPF paper examines these issues at greater length.
We conclude with five important recommendations for developers intent on maximizing the utility and minimizing the risks of BCIs:
(1) Employ Privacy Enhancing Technologies to Safeguard Data;
(2) Ensure On/Off User Controls;
(3) Enshrine Purpose Limitation;
(4) Focus on Data Quality; and
(5) Promote Security.
What are Brain-Computer Interfaces? //
Some BCIs, like Neuralink’s, are “invasive” or “semi-invasive,” meaning they are implanted into a user’s brain or installed on its surface. But many others are non-invasive, commonly utilizing external electrodes, and do not require surgery. The three main types of BCIs are:
(1) Invasive BCIs, which are installed directly into the wearer’s brain, are typically used in the medical context. For example, clinical implants have been used to improve patients’ motor skills. Invasive implants can include devices like an electrode array called a Utah array, and new inventions like neural dust and neural lace, which drape over or are inserted into multiple areas within the brain.
(2) Semi-invasive BCIs are often installed on top of the brain, rather than into the brain itself. Such BCIs rely on electrocorticography (ECoG), in which electrodes are attached to the exposed surface of the brain to measure electrical activity of the cerebral cortex. ECoG is most widely used for managing epilepsy. The Neuralink device is promoted as a coin-sized semi-invasive implant that attaches to the surface of a user’s brain and sends signals to an external device.
(3) Non-invasive BCIs typically rely on neuroinformation gathered from electroencephalography (EEG). EEG is a common method for recording electrical activity, with electrodes placed on the scalp to measure neural activity. Other non-invasive techniques use brain stimulation. For example, transcranial direct current stimulation (tDCS) sends low-level currents to the frontal lobes. While non-invasive BCIs might be an attractive option for headset integration, “non-invasive” is not synonymous with harmless: BCIs are a relatively new technology that can pose health, privacy, and security risks.
Individuals may be uneasy about devices that can read a user’s thoughts, or alter the composition of these thoughts. However, it is important to note that today’s BCIs do not read or modify thoughts—instead, they rely on machine learning algorithms that have been trained to recognize brain activity in the form of electrical impulses and make inferences about emotional states, actions, and expressions.
Regardless of the technique used, collecting and processing brain signals to derive useful neuroinformation can be a challenging process. Most data derived via BCIs is noisy (especially in the case of non-invasive applications), and creating computer systems that can identify and remove noise is a complex and cumbersome undertaking. After actionable signals are gathered, various artificial intelligence and machine learning models are applied to extract and classify useful neuroinformation. The final task is to accurately translate and match neuroinformation to the desired outcome or action—a process researchers are still attempting to master.
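The pipeline described above (acquire a noisy signal, remove noise, extract features, classify, and map the result to an action) can be sketched in a few lines. The sketch below is illustrative only: it assumes a simulated 250 Hz recording and a toy threshold decoder, whereas real BCI systems use far richer features and trained machine-learning models.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, low=8.0, high=30.0, order=4):
    """Keep only a frequency band of interest (e.g., motor-related rhythms)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def band_power(signal):
    """Crude feature: mean squared amplitude of the filtered signal."""
    return float(np.mean(signal ** 2))

def classify(power, threshold):
    """Toy decoder: map the feature to one of two imagined commands."""
    return "move" if power > threshold else "rest"

# Simulate 2 s of a noisy 250 Hz recording containing a 12 Hz "intent" rhythm.
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
active = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)
idle = 0.5 * rng.standard_normal(t.size)

threshold = 0.25
print(classify(band_power(bandpass(active, fs)), threshold))  # move
print(classify(band_power(bandpass(idle, fs)), threshold))    # rest
```

Even in this toy, the filter only attenuates out-of-band noise; the hard problems researchers face are in separating overlapping in-band signals and translating them reliably into the intended action.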
The Benefits of BCI Technologies & Top of Mind Privacy and Security Risks //
BCIs are, and will continue to be, deployed in doctor’s offices, schools, workplaces, and within our homes and communities. As a novel interface, BCIs hold incredible promise across numerous sectors, from health to gaming. However, each sectoral use comes with a host of unique privacy and security risks.
In the health and wellness sector, BCIs are used to monitor fatigue, restore vision and hearing, and control accessibility devices like wheelchairs, as well as help power prosthetic limbs. The ability of BCIs not only to read brain signals but also to stimulate activity in the brain can create benefits for patients. In the diagnostic and treatment arena, BCIs lessen and sometimes eliminate the need for subjective patient responses. In one NIH-funded study, BCIs were used to detect glaucoma progression over time using objective measurements made by the BCI; this represents a potentially substantial diagnostic improvement over traditional glaucoma assessment, which relies on subjective patient-generated data. In the accessibility space, BCIs have spun off a new generation of neuroprosthetics, or artificial limbs that move in response to patients’ thoughts, and have aided in the creation of BCI-powered wheelchairs.
While health-related BCIs are promising treatments for some patients, they can be vulnerable to security breaches. Recently, researchers showed that hackers, through imperceptible noise variations of an EEG signal, could force BCIs to spell out certain words. According to the researchers, the consequence of this security vulnerability could range from user frustration to a severe misdiagnosis.
In the gaming context, non-invasive wearables outfitted with EEG electrodes provide players with the ability to play games and control in-game objects using their thoughts. Games such as The Adventures of Neuroboy allow players to move objects in the game using their thoughts, which are measured through an EEG-fitted cap. Companies like Neurable are looking to push the limits of user interaction even further by developing AR/VR headsets outfitted with EEG electrodes that collect brainwave data—this data acts as the primary driver of gameplay. In Neurable’s first demo, Awakening, the player assumes the role of a psychokinetically gifted child who must escape from a government prison. By reading the player’s electrical brain impulses, the BCI lets the player choose between a host of objects to escape from prison and advance through the game.
BCIs can make games more immersive for players and give game developers novel tools, but advances in immersive gaming depend on the collection of neuroinformation, which can lead to heightened privacy risks. Existing immersive games in the AR/VR space often rely on collecting and processing potentially sensitive personal information such as geolocation data and biometric data like the player’s gait and eye movement, as well as audio and video recordings of the player. Future-looking gaming hardware, such as the headset being developed by Neurable, could pair neuroinformation with vast sets of sensitive personal information, which increases the chances of user identifiability, while potentially revealing sensitive biological information about the player. This data could be used for a range of commercial purposes, from product improvement and personalization to behavioral advertising and profiling.
BCIs can also be found in schools, where companies claim they can measure student attentiveness. For example, BrainCo, Inc. is developing BCI technology that involves students wearing EEG-fitted headbands in class. The students’ neuroinformation is gathered and displayed on a teacher’s dashboard which allegedly provides insight into student attention levels. In the future, BCIs might be deployed in the education arena to aid students with learning disabilities; some schools already employ AR and VR technologies for this purpose. While personalized education metrics can be helpful to students, parents, and teachers, inaccurate BCI data in the education context could lead to false conclusions about student aptitude, and accurate information could put students at risk of disproportionate penalties for inattentiveness or other behavior.
Measuring attentiveness through the use of BCIs is not unique to the education space. Currently, most of the uses of BCIs in the workplace are proposed as ways to measure engagement and improve employee performance during high-risk tasks. BCIs deployed in the employment context can raise the risks of employer surveillance and discrimination (e.g., data about workers’ emotions could lead to penalties, firing decisions, and other actions).
One of the most future-looking applications of BCIs is in the smart cities and communities space. As early as 2014, researchers proposed a prototype for a Bluetooth-enabled BCI that could help disabled individuals control and direct a smart car over short distances, increasing individuals’ independence. These prototypes, while still very much a work in progress, open the possibility for more complex uses in the future. The potential benefits are substantial, but these technologies also create risks, including the collection and use of drivers’ neuroinformation in combination with other sensitive data, such as location, when controlling the vehicle. Additionally, mind-controlled cars pose a public safety risk: driving a car using often imprecise, opaque, and sometimes inaccurate neuroinformation.
In addition to sector-specific privacy risks, BCIs are generally susceptible to the same drawbacks and potential harms associated with other algorithmic processes. For example, harmful bias, a lack of transparency and accountability, as well as a reliance on faulty training data, can lead to individual and collective losses in opportunity. Implementing accurate autonomous systems presents its own set of challenges such as: whether a particular system is appropriate to achieve a desired outcome; whether the systems are designed (and re-designed) to reduce bias; and whether the system raises ethical or legal risks.
As a novel interface, BCIs raise important data protection questions that should be addressed throughout their development cycle. Below, we put forward just a handful of the high-level recommendations that developers should adhere to when seeking to create inclusive, and privacy-centric, brain-computer interfaces.
Key Recommendations for BCI Development //
Because the collection and use of neuroinformation involves a number of privacy and ethical concerns that go beyond current laws and regulations, stakeholders working in this emerging field should follow these principles for mitigating privacy risks:
(1) Employ Privacy Enhancing Technologies to Safeguard Data – BCI providers should integrate recent advances in privacy enhancing technologies (PETs), such as differential privacy, in accordance with principles of data minimization and privacy by design.
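As one concrete illustration of a PET, the Laplace mechanism from differential privacy can add calibrated noise to an aggregate neurometric before it is released. This is a minimal sketch under stated assumptions, not a production design: the per-session “attention scores,” the clipping bounds, and the epsilon value are all hypothetical.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng=None):
    """Differentially private mean via the Laplace mechanism.

    Each value is clipped to [lower, upper], so one user's data can shift
    the mean by at most (upper - lower) / n -- the sensitivity. Laplace
    noise scaled to sensitivity / epsilon masks any individual's record.
    """
    rng = rng or np.random.default_rng()
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return float(np.mean(clipped)) + noise

# Hypothetical per-session attention scores derived from EEG features.
scores = [0.62, 0.71, 0.55, 0.80, 0.67, 0.59, 0.73, 0.64]
private_avg = dp_mean(scores, lower=0.0, upper=1.0, epsilon=1.0,
                      rng=np.random.default_rng(7))
print(round(private_avg, 3))  # near the true mean (~0.664), plus noise
```

Smaller epsilon values yield stronger privacy at the cost of noisier aggregates; choosing that trade-off is a policy decision as much as an engineering one.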
(2) Ensure On/Off User Controls – Wherever appropriate, BCI users should have the option to control when their devices are on or off. Some devices may need to always be on in order to fulfill their functions—for example, a BCI that treats a neurological condition. However, when being always on is not an essential feature of the device, users should have a clear and definite way to turn off their device. As with other devices, there are considerable privacy risks when a BCI is always gathering data or can be turned on unintentionally.
(3) Enshrine Purpose Limitation — BCI providers should state the purpose for collecting neuroinformation and refrain from using that information for any other purpose absent user consent. For example, if an educational BCI gauges student attentiveness for the purpose of helping a teacher engage the class, it should not use attentiveness data for another purpose—like ranking student performance—without express and informed consent. BCI providers should also consider limiting unnecessary cross-device collection.
(4) Focus on Data Quality — Providers should strive to use the most accurate data collection processes and machine-learning tools available to ensure accuracy and precision. Algorithmic explainability and reproducibility of results are critical components of accuracy. It is important for BCIs to be both accurate (turning neural signals into correct neuroinformation) and precise (consistently reading the same signals to mean the same thing).
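The accuracy/precision distinction can be made concrete with two toy metrics: accuracy compares decoded commands against the user’s actual intent, while a simple consistency check asks whether identical signals always decode to the same command. The data below is hypothetical and purely illustrative.

```python
from collections import defaultdict

def accuracy(pairs):
    """Fraction of (decoded, intended) pairs where the decoder was correct."""
    return sum(pred == truth for pred, truth in pairs) / len(pairs)

def consistency(signal_label_pairs):
    """Fraction of repeated signals that always decode to the same label."""
    seen = defaultdict(set)
    for signal, label in signal_label_pairs:
        seen[signal].add(label)
    return sum(len(labels) == 1 for labels in seen.values()) / len(seen)

# Hypothetical decoded-vs-intended commands: the third trial is a miss.
decoded = [("left", "left"), ("left", "left"), ("right", "left"), ("rest", "rest")]
print(accuracy(decoded))  # 0.75

# Hypothetical repeats: "sigB" decodes inconsistently across trials.
repeats = [("sigA", "left"), ("sigA", "left"), ("sigB", "right"), ("sigB", "rest")]
print(consistency(repeats))  # 0.5
```

A decoder can score well on one metric and poorly on the other, which is why providers should track both rather than reporting a single headline number.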
(5) Promote Security — BCI providers should take appropriate measures to secure neuroinformation. BCI devices should be secure against hacking and malware, and company servers should be secure against unauthorized access and tampering. Furthermore, data transfers should be accomplished by secure means, subject to strong encryption.
Moving Forward //
The Future of Privacy Forum is working with stakeholders to better analyze these issues and make recommendations regarding appropriate data protections for brain-computer interfaces. Curated news, resources, and academic papers on BCIs and related topics are available here. We welcome your thoughts and feedback at [email protected] and [email protected].
Additional Resources //
In addition to the resources we curated here, we found the following academic papers, white papers, ethical frameworks, and policy frameworks helpful to understanding the current BCI landscape.
Academic Papers //
App Stores for the Brain: Privacy & Security in Brain-Computer Interfaces—by Tamara Bonaci, Ryan Calo, and Howard Chizeck—noting that BCI-enabled technology carries a great potential to improve and enhance the quality of human lives, while also identifying some of the key privacy and security risks associated with BCIs.
Four Ethical Priorities for Neurotechnologies and AI—by Rafael Yuste, et al.—noting that BCI technology can exacerbate social inequalities; offer corporations, hackers, governments, and others new ways to exploit and manipulate people; and profoundly alter some core human characteristics: private mental life, individual agency, and an understanding of individuals as entities bound by their bodies.
Keeping Disability in Mind: A Case Study in Implantable Brain-Computer Interface Research—by Laura Specker Sullivan, Eran Klein, Tim Brown, and Matthew Sample—noting that developers espouse functional, assistive goals for their technology, but often note uncertainty in what degree of function is “good enough” for the individual end user.
Towards New Human Rights in the Age of Neuroscience and Neurotechnology—by Marcello Ienca and Roberto Andorno—assessing the implications of emerging neurotechnology applications in the context of human rights frameworks and suggesting that existing human rights may not be sufficient to respond to these emerging issues.
White Papers //
iHuman: Blurring Lines between Mind and Machine—by The Royal Society—arguing that neural interface technologies will continue to raise profound ethical, political, social, and commercial questions that should be addressed as soon as possible in order to create mechanisms to approve, regulate, or control the technologies as they develop, and to manage the impact they may have on society.
Policy Frameworks //
IEEE Neuroethics Framework: Addressing the Ethical, Legal and Social Implications of Neurotechnology—by IEEE—a developing matrix intended to act as a “living document” that will evolve with new BCI technology and new ethical, legal, and social issues, ideas, and perspectives.
OECD Recommendation on Responsible Innovation in Neurotechnology—adopted by the OECD Council in December 2019—this recommendation is the first international standard in this domain. It aims to guide governments and innovators to anticipate and address the ethical, legal and social challenges raised by novel neurotechnologies while promoting innovation in the field.
Standards Roadmap: Neurotechnologies for Machine Interfacing—available through IEEE—noting the need for standards in the BCI arena and providing an overview of the existing and developing standards in the field of neurotechnologies for brain‐machine interfaces. The roadmap is broken into five main topics: (1) the science behind sensing technologies, (2) feedback mechanisms, (3) data management, (4) user needs, and (5) performance assessments of BCIs.
Image courtesy of Pixabay, available here.