FPF Testifies at FTC Data Portability Workshop

Yesterday, on September 22, 2020, the Federal Trade Commission held a public workshop, “Data To Go,” examining the benefits and challenges of data portability frameworks for consumers and competition. As a panelist during the first discussion, FPF’s Gabriela Zanfir-Fortuna discussed: how data portability operates in different commercial sectors; lessons learned from the GDPR and other global laws; and observations on the dual nature of data portability, as both a means to facilitate competition and a right of individuals to exercise control over their data.

The day-long workshop featured a wide range of privacy advocates, academics, government regulators, economists, and other experts. Below we provide key highlights from the workshop’s four panels: (1) data portability initiatives in the European Union, California, and India; (2) financial and health portability regimes; (3) reconciling the benefits and risks of data portability; and (4) realizing data portability’s potential: material challenges and solutions.

 

Panel 1: Data Portability Initiatives in the European Union, California, and India

FPF’s Gabriela Zanfir-Fortuna served as a panelist during Panel 1 of the workshop, discussing lessons learned from the European Union’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and other data portability initiatives in India, Brazil, and Singapore. Other panelists included Inge Graef (Tilburg University), Rahul Mattan (Trilegal India), Stacey D. Schesser (Office of the California Attorney General), and Karolina Mojzesowics (European Commission), and the panel was moderated by Guilherme Roschke (FTC). 

Panelists agreed that the GDPR and the CCPA both conceptualize data portability as a right of the data subject, underpinned by the idea that individuals should have control over how personal data is collected and used. Panelists also agreed that portability plays an important role in competition, with Inge Graef noting its unique role in removing barriers to entry for data-driven startups, a role increasingly reflected in EU policy documents. Rahul Mattan shared details about the complex and innovative framework currently being set up in India to allow portability and interoperability in the financial services sector.

Gabriela Zanfir-Fortuna added that many complex questions remain about how to make portability workable in practice, including: authentication and verification; personal data of the portability requestor that also includes personal data of third parties (such as in photographs or conversations); risks and responsibilities when porting data to services with weaker security or privacy protections; and other downstream uses of data, as reflected by the debates at the intersection of the Payment Services Directive 2 and the GDPR.

Panelists also discussed and largely agreed on key similarities and differences between the GDPR and the CCPA. The right of data portability is more limited and nuanced under the GDPR, which excludes “inferences” from its scope. Unlike the GDPR, the CCPA also combines the rights of access and portability, requiring personal data to be provided in a portable format following an access request. One of the key points emphasized by both Gabriela and Karolina Mojzesowics was that more than two years of experience with the GDPR shows that the right to data portability is not used to its full potential, with very few requests being made to organizations. The European Commission intends to address this by raising awareness and through initiatives to make portability practical.

As CCPA enforcement begins, Stacey Schesser indicated that the Regulations adopted by the California Office of the Attorney General aimed to provide greater clarity on authentication and verification mechanisms, to ensure that the “right to know” will not lead to unauthorized access to data. In addition, she mentioned that California’s ballot initiative (Proposition 24) may change the legal requirements if passed: it would no longer explicitly refer to data portability, but would still require data in subject access requests to be made available in a machine-readable format (an implied right to portability).

 

Panel 2: Financial and Health Portability Regimes (Case Studies)

Panel 2, moderated by Katherine White (FTC), explored case studies from the financial and health care sectors, including new health IT rules from the U.S. Office of the National Coordinator for Health Information Technology (ONC) and the UK’s Open Banking Initiative. Panelists from both the financial and healthcare sectors discussed the growing role of data portability in each sector, agreeing that consumer trust remains an important underlying issue for both.

In the healthcare industry, panelists remarked on the trend of data portability being used to improve individual access to their medical records. Dan Horbatt (Particle Health) discussed some of the technological and economic barriers to embedding data portability, and remarked that the biggest trend is towards more seamlessness in communicating patient permissions for collating medical records. Dan Rucker (US Department of Health and Human Services) remarked that individual data portability has long been a goal of HIPAA. Following the recently mandated US healthcare interoperability rules, Rucker envisions the proliferation of new apps to facilitate portability in the next few years, as well as more opportunities for the Internet of Things (IoT). Rucker highlighted the ongoing need for standardized tools to facilitate interoperability and portability for medical records.

In the financial sector, panelists discussed Open Banking efforts in the United States and the UK. Open Banking refers to the use of open Application Programming Interfaces (APIs) to enable third-party financial service providers to access consumer transactions and other financial data from banks through new financial apps and services. According to Bill Roberts (UK Competition and Markets Authority), the UK’s Open Banking Rules were driven by a desire to address competition and promote an emerging fintech industry. Ongoing challenges remain with identification and authentication mechanisms, and Roberts noted that many banks are increasingly turning to biometric methods for authentication. Michael S. Barr (University of Michigan) observed that the UK, Singapore, India, and Australia have all made progress in Open Banking to improve user control over financial information, and to increase market competition. Although the US lags behind the UK in implementing open banking rules, Barr believes there is huge potential for consumers and for greater competition.
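
To illustrate the access pattern panelists described, below is a minimal sketch in Python contrasting token-based API access with credential sharing. It is purely illustrative: the endpoint URL, account ID, scope, and token are hypothetical placeholders and do not correspond to any real bank’s API or to the UK Open Banking specification.

```python
# A minimal sketch of API-based, consent-scoped access to financial data,
# contrasted with credential sharing ("screen scraping"). All names below
# are hypothetical placeholders, not a real bank's API.
import requests

BANK_API = "https://api.example-bank.test/open-banking/v1"  # placeholder endpoint


def fetch_transactions_via_api(access_token: str, account_id: str) -> dict:
    """Token-based access: the third party never handles the user's password.

    The token is issued only after the consumer consents, is limited to a
    narrow scope (e.g., read-only transactions), and can be revoked by the
    user or the bank at any time.
    """
    resp = requests.get(
        f"{BANK_API}/accounts/{account_id}/transactions",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


# Screen scraping, by contrast, requires the consumer to hand over their
# banking username and password so an aggregator can log in as the user,
# giving broad, hard-to-revoke access -- which is why panelists viewed
# API-based transfers as the more secure approach.
```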

 

Panel 3: Reconciling the Benefits and Risks of Data Portability

In Panel 3, moderated by Ryan Quillian (FTC), panelists discussed the benefits and risks of data portability with an eye toward the twin aims of protecting consumers and promoting competition. Panelists agreed that data portability has a range of policy goals, including individual autonomy, access to data, and the broader societal promotion of consumer welfare, innovation, and competition, with individual panelists offering remarks on global initiatives (such as the Data Transfer Project), the sector-specific nature of portability, and particular risks arising from the lack of uniform regulation in the United States.

Ali Lange (Google) described the privacy and security protections in place for two existing data portability tools: Google Takeout, for individual exports and transfers of data across Google services; and the Data Transfer Project, an open source initiative of major online platforms and service providers, including Apple, Microsoft, Google, Facebook, and Twitter, that facilitates service-to-service data transfers.

Several commenters offered perspectives on the sector-specific nature of portability. For example, Gabriel Nicholas (New York University) commented that the competitive effect of data portability is not uniform across sectors, arguing that the FTC should use alternative methods to promote competition. One key point he made was about “group portability,” and how portability could be very useful as a group function or even a group right, for example allowing a group of friends to move together from one platform to another. Hodan Omaar (Center for Data Innovation) explained that there is a greater need for data portability in sectors where there is a greater disparity between the goals of organizations that hold data and the intent of the data subjects.

Meanwhile, Pam Dixon (World Privacy Forum) outlined risks associated with onward transfers. Particularly in the healthcare sector, she observed that regulatory gaps exist in the United States for non-HIPAA covered entities. In contrast, she noted that the EU’s GDPR provides baseline data protection rules to prevent regulatory gaps. Peter Swire (Georgia Tech) argued in favor of grounding the theory of data portability in practical scenarios, pointing out that studying use cases in detail can be very helpful for understanding the limits and benefits of portability. For example, Professor Swire noted the concept of “multihoming” for data, which describes the fact that, in practice, portability is often not about moving data entirely from one provider to another, but simply about transferring a copy of the data to another service that the individual would like to try out. Thus, data portability might help create a place for users to have multiple “homes.”

 

Panel 4: Realizing Data Portability’s Potential: Material Challenges and Solutions

Panel 4, moderated by Jared Brown (FTC), considered the challenges of data security, privacy, standardization, and interoperability with a range of industry representatives (Mastercard, Digi.me), civil society (Mission:data Coalition), and consumer privacy advocates (Public Knowledge and Electronic Frontier Foundation). 

Overall, panelists agreed on the need to: promote trust among individuals; provide individuals with greater control over their data; and protect and use consumer data responsibly. According to Erika Brown Lee (Mastercard), a consumer-centric approach to data portability involves basing the data transfers between services on consumer requests (i.e., consent), and reducing friction for consumers. Brown Lee also noted that security is the critical challenge for companies offering data portability, referring to verification and authentication in particular. Security risks also vary depending on how data portability is implemented, with panelists agreeing that utilizing APIs for data transfers is generally better from a security perspective than the widespread practice of credential sharing (“screen scraping”). Julian Ranger (Digi.me) noted that screen scraping has a greater potential for abuse, and Michael Murray (Mission:data Coalition) referred to screen scraping as expensive, buggy, and inconsistent.

Consumer privacy advocates emphasized the need for greater privacy protections for individuals and accountability for companies. Sara Collins (Public Knowledge) emphasized the important need for comprehensive federal privacy legislation to close regulatory gaps, and to set minimum standards. In addition, Bennett Cyphers (EFF) stated that, to solve many of the wider competition issues in the technology sector, regulation in areas other than data portability could incentivize data sharing. Discussing how responsibility and liability for downstream uses of data should be allocated in the data portability context, Cyphers stated that liability should rest with the actor responsible for wrongdoing. However, in cases where data is shared without the consumer’s knowledge, he stated that liability ought to rest with the data transferrer.

 

Moving Forward 

Many complex questions remain in terms of how to make data portability workable for organizations to implement in practice. In his keynote remarks for the FTC workshop, Peter Swire (FPF Senior Fellow) discussed his recently published article, “Portability and Other Transfers Impact Assessment (PORT-IA).” The Impact Assessment aims to provide a framework for multi-disciplinary experts to use in a particular data transfer context to assess issues of data portability, including what Swire refers to as “Other Required Transfers” (e.g., a transfer of an entire database). Due to the sectoral nature of the issues that arise involving data portability, there is unlikely to be a one-size-fits-all solution. However, as lawmakers and policymakers around the world increasingly conceptualize and implement portability and interoperability regimes, data portability tools and commercial practices will continue to develop across the data ecosystem.

 

Thank you to Kai Koppoe, Hunter Dorwart, and Veronica Alix for their contributions to this blog.

The First National Model Student Data Privacy Agreement Launches

Protections for student data privacy took an important step forward this summer when the Student Data Privacy Consortium (SDPC) released the first model National Data Privacy Agreement (NDPA) for school districts to use with their technology service providers. Ever since education technology (edtech) emerged as a key tool in classrooms, both schools and edtech companies have struggled to create data privacy agreements (DPAs) that adequately protect student data and meet both schools’ and providers’ needs. DPAs provide crucial protections for student data by limiting its use and sharing. A key challenge in that process is that US federal student privacy law and many state laws require specific contractual clauses or protections. The new NDPA addresses this challenge by streamlining the education contracting process and, in the SDPC’s words, establishing “common expectations between schools/districts and marketplace providers.”

In this blog, we outline some of the major contracting challenges, the SDPC’s creation of the model contract, and the ways in which education stakeholders can use the NDPA to reduce the burden of contracting.

Data Privacy Agreements: Challenges for Schools and Edtech Companies

Schools and districts typically use hundreds of companies to provide services every school year. Edtech companies provide widely varying services to schools and districts, such as data storage, educational games, learning management systems, attendance tracking, and many other school functions. The privacy protections that each company must implement can vary based on the type and sensitivity of student data they hold and how it is collected, used, or shared. In addition, as 43 states have passed more than 130 student privacy laws in the past five years, districts and edtech companies must incorporate significant legal obligations into their agreements or contracts. Districts also want to ensure that companies limit their student data collection and use.

However, contracting individually with each service provider to ensure this protection is often extremely difficult for both districts and companies. Each DPA might require different levels of sharing, control, and protections regarding student data. And few districts can hire teams of technologists and lawyers to understand and negotiate student data privacy with each of their service providers. As a result, districts across the country have sought ways to ensure trust and facilitate the DPA process.

Edtech companies face similar challenges as they bring important services to schools and districts. Because contracts involving student data must align with federal and state laws and address each district’s needs and concerns, edtech companies find it challenging and expensive to negotiate DPAs with school districts at scale. Often, schools and districts require DPAs that are unique to their institution but might contain requirements that are not appropriate for every edtech company; for example, a company that deletes data after it is no longer needed (to improve data security) would be unable to comply with a contractual requirement to return all student data after the company provides its service. Larger edtech companies might have teams of lawyers who can work with clients on these issues, but many smaller companies without legal departments may sign contracts they know they cannot observe because they lack the resources or leverage to incorporate language that fits the realities of their business.

These challenges have forced both school districts and companies to dedicate significant time and resources to DPAs each year. To alleviate schools’ contracting challenges and strengthen their knowledge and bargaining power with edtech companies, a group of Massachusetts school districts created the SDPC in 2015. The organization now has school district alliances in 29 states.

How the SDPC Developed the NDPA

SDPC’s alliances have created influential state-wide agreements in several states, such as California and Massachusetts. In recent years, other state alliances have increasingly adapted versions of the California and Massachusetts agreements in their own states to ease the burdens of contracting with their service providers. As a result of the growing popularity of the state-level DPAs and calls to establish baseline agreements for its alliances, SDPC formed a working group to create a national set of standards that align as much as possible with current laws, requirements, and edtech business practices.

For more than a year, the SDPC working group––made up of attorneys, district officials, company representatives, and nonprofit organizations, including FPF––met to create the NDPA. The goal was to improve the language from the state agreements and to create a balanced starting point for negotiations between schools and companies nationwide. The result is a contract that will likely save time and money for schools, districts, and edtech companies, ultimately benefiting students by allowing schools and districts to allocate more resources to learning and less to negotiating.

The working group attempted to balance substantive changes and create a structure that makes the contract easier to use than past state-level versions. The group addressed provisions regarding data breaches, sub-processor restrictions, advertising limits, de-identified data use, and other issues, reflecting multiple compromises between districts and companies. The working group also added sections that will allow districts and edtech companies to easily include information that was challenging in past DPAs, such as descriptions of companies’ services, terms that address state law, and changes to standard terms. Over the next year, SDPC’s state-level alliances will create state-specific clauses allowing them to incorporate unique requirements from their state laws into the NDPA.

Ongoing Revision and Stakeholder Participation

The SDPC’s NDPA contains terms that align with most US state and federal student privacy laws. Nonetheless, not every provision in the agreement will be perfect for every school, district, or company. Districts and companies concerned about specific provisions will still benefit from using the NDPA because it will set a common starting point for districts’ and companies’ discussions of data privacy. 

While a significant step forward, the NDPA has room for improvement. No matter how well-vetted, a resource such as this can cause unintended consequences, and stakeholders should not hesitate to identify and communicate them. For example, transactional challenges remain regarding how the NDPA will fit within the structure of a service agreement between schools and companies, especially since the NDPA has some overlapping and, therefore, redundant definitions: “third party,” “operator,” and “provider,” for instance, all refer to the company signing the contract.

There are also substantive provisions that may cause issues down the road: the audit clause, for example, could allow multiple districts to simultaneously audit one company. Although the NDPA improves prior language about breach notifications, it does not explicitly define a data breach, thus introducing potential ambiguity about when a notification is required. Furthermore, the NDPA might impact the national edtech market by creating a barrier to entry for smaller companies, which often have little opportunity to negotiate inapplicable or unnecessarily burdensome provisions. SDPC has said that it will continue to develop the NDPA, so feedback about the agreement’s challenges and strengths will be necessary and useful.

Adopting the NDPA will benefit schools, districts, and companies because it establishes a common baseline for these stakeholders to begin student data privacy negotiations. Even if it does not apply perfectly to everyone, the agreement will save contracting resources for all education stakeholders. This efficiency, in turn, will allow schools and districts to better protect student data as they provide important services and education for American students.

Data Protection Expert Agnes Bundy Scanlan and Fundraising and Philanthropy Leader Elaine Laughlin Join Board of Directors

FPF is pleased to announce two new Directors, Agnes Bundy Scanlan and Elaine Laughlin.

Agnes Bundy Scanlan is a global leader in the governance, law, regulatory and compliance risk, and data information security fields. Agnes is also an experienced legal counsel and advisor to industry and government. She has built a sterling reputation as being proactive, accessible and responsive to governance and regulatory changes, and has received recognition within the corporate world as an expert and leader in these fields. Her background includes contributions in the business community as well as on Capitol Hill and in the Executive Branch.

“Agnes is widely respected in the data protection field for her legal knowledge, integrity, and practical guidance,” said FPF CEO Jules Polonetsky. “Her experience as one of the founders of the professional data protection field will greatly benefit our board and our community.”

Agnes is currently the president of the Cambridge Group, LLC, which provides regulatory risk consulting, operational expertise on consumer banking regulations, interim staff augmentation, training guidance, and more. Agnes also currently serves as an Independent Director for Truist, where she is a member of the Board Governance and Nominating, Risk, and Trust Committees.

Along with her roles at both Cambridge Group, LLC and Truist, Agnes also serves on the Advisory Board for the Tech, Law, and Security Program at American University College of Law. Here, she tackles the challenges and opportunities posed by emerging technology to ensure that technology is used to support the functioning of effective democracies and core democratic values, including privacy, security, freedom of speech, and the discernment of truth.

Prior to her time at the Cambridge Group, Ms. Bundy Scanlan served as a senior adviser for Treliant Risk Advisors from 2012 to 2015, where she counseled financial services firms on strategy, governance, regulatory, and risk management matters.

She holds a J.D. from Georgetown University Law Center and a B.A. from Smith College. She is a member of the Bar of the Supreme Court of the United States, the Bar of the Commonwealth of Massachusetts, the Bar of the Commonwealth of Pennsylvania, and the Bar of the Superior Court of the District of Columbia. She was also formerly the chair of the International Association of Privacy Professionals.

Elaine Laughlin is a proven leader in fundraising and philanthropy. She has been a leader for various public broadcasting outlets over the course of her career. During her time at WETA, Elaine served as the Vice President of Foundation and Government Development. She was instrumental in the production of programs such as PBS NewsHour, Washington Week, the documentaries of Ken Burns, Henry Louis Gates Jr.’s Finding Your Roots and many other public broadcasting favorites.

Elaine is currently the Director of Development at WSBE, southeastern New England’s PBS station. She leads development for the station and is committed to delivering programs and services that educate, inform, enrich, inspire and entertain viewers of all ages.

“We’re already tapping Elaine’s expertise as FPF continues to expand and reach new audiences,” said Polonetsky. “We’re excited to have her knowledge about how growing organizations can deliver a range of impactful programs.”

She holds a B.A. in English from Wellesley College and a Master’s in Communication from Cornell University.

Congratulations to both Agnes and Elaine on joining the Board of Directors!

Learn more about the FPF Board here.

Five Top of Mind Data Protection Recommendations for Brain-Computer Interfaces

By Jeremy Greenberg, [email protected] and Katelyn Ringrose [email protected]

Key FPF-curated background resources – policy & regulatory documents, academic papers, and technical analyses regarding brain-computer interfaces – are available here.

Recently, Elon Musk livestreamed an update for Neuralink, his startup centered on creating brain-computer interfaces (BCIs). “BCI” is an umbrella term for devices that detect, amplify, and translate brain activity into usable neuroinformation. At the event, Musk unveiled his newest BCI, implanted in the brain of a pig.

Musk predicts that future BCIs will not only “read” individuals’ brainwaves, but also “write” information into the user’s brain to accomplish such goals as identifying medical issues and allowing users with limited movement to type using their thoughts. Musk even mentioned the long-term prospect of downloading memories. Explaining Neuralink’s newest device, Musk said, “In a lot of ways, it’s kind of like a Fitbit in your skull, with tiny wires.”

It can be hard to separate the facts from the hype surrounding cutting edge product announcements. But it is clear that brain-computer interfaces are increasingly used by hospitals, schools, individuals, and others for a range of purposes. It is equally clear that BCIs often use sensitive personal data and create data protection risks. 

Below, we explain how BCI technologies work, how BCIs are used today, some of the technical challenges associated with implementing the technologies, and the data protection risks they can create. An upcoming FPF paper examines these issues at greater length. 

We conclude with five important recommendations for developers intent on maximizing the utility and minimizing the risks of BCIs:

(1) Employ Privacy Enhancing Technologies to Safeguard Data;

(2) Ensure On/Off User Controls;

(3) Enshrine Purpose Limitation;

(4) Focus on Data Quality; and

(5) Promote Security.

What are Brain-Computer Interfaces? //

Some BCIs are “invasive” or “semi-invasive”—implanted into a user’s brain or installed on the brain surface—like Neuralink. But many others are non-invasive, commonly utilizing external electrodes, which do not require surgery. The three main types of BCIs are: 

1) Invasive BCIs, which are installed directly into the wearer’s brain, are typically used in the medical context. For example, clinical implants have been used to improve patients’ motor skills. Invasive implants can include devices like an electrode array called a Utah array, and new inventions like neural dust and neural lace, which drape over or are inserted into multiple areas within the brain.

2) Semi-invasive BCIs are often installed on top of the brain, rather than into the brain itself. Such BCIs rely on electrocorticography (ECoG), in which electrodes are attached on the exposed surface of the brain to measure electrical activity of the cerebral cortex. ECoG is most widely used for managing epilepsy. The Neuralink device is promoted as a coin-sized semi-invasive implant that attaches to the surface of a user’s brain and sends signals to an external device.

3) Non-invasive BCIs typically rely on neuroinformation gathered from electroencephalography (EEG). EEG is a common method for recording electrical activity, with electrodes placed on the scalp to measure neural activity. Other non-invasive techniques use brain stimulation. For example, transcranial direct current stimulation (tDCS) sends low level currents to the frontal lobes. While non-invasive BCIs might be an attractive option for headset integration, “non-invasive” is not synonymous with harmless. BCIs are a relatively new technology with the potential for health, privacy, and security risks. 

Individuals may be uneasy about devices that can read a user’s thoughts, or alter the composition of these thoughts. However, it is important to note that today’s BCIs do not read or modify thoughts—instead, they rely on machine learning algorithms that have been trained to recognize brain activity in the form of electrical impulses and make inferences about emotional states, actions, and expressions. 

Regardless of the technique used, collecting and processing brain signals to derive useful neuroinformation can be a challenging process. Most data derived via BCIs is noisy (especially in the case of non-invasive applications), and creating computer systems that can identify and remove noise is a complex and cumbersome undertaking. After actionable signals are gathered, various artificial intelligence and machine learning models are applied to extract and classify useful neuroinformation. The final task is to accurately translate and match neuroinformation to the desired outcome or action—a process researchers are still attempting to master.
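
To make that pipeline more concrete, here is a minimal, purely illustrative sketch in Python: it filters noisy synthetic EEG-like epochs, extracts simple band-power features, and trains a basic classifier. The sampling rate, frequency bands, labels, and random data are assumptions for illustration, not any vendor’s actual processing chain.

```python
# Illustrative sketch: filter noisy EEG-like signals, extract band-power
# features, and classify them. All values and data here are synthetic.
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.linear_model import LogisticRegression

FS = 250  # assumed sampling rate in Hz


def bandpass(signal, low=1.0, high=40.0, fs=FS, order=4):
    """Remove drift and high-frequency noise outside the band of interest."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)


def band_power(signal, band, fs=FS):
    """Average spectral power within a frequency band (a common EEG feature)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()


def features(signal):
    clean = bandpass(signal)
    # Alpha (8-12 Hz) and beta (13-30 Hz) power as two toy features.
    return [band_power(clean, (8, 12)), band_power(clean, (13, 30))]


# Synthetic stand-in data: 100 one-second "epochs" with random labels
# (e.g., "attentive" vs. "not attentive").
rng = np.random.default_rng(0)
epochs = rng.normal(size=(100, FS))
labels = rng.integers(0, 2, size=100)

X = np.array([features(e) for e in epochs])
clf = LogisticRegression().fit(X, labels)
print("Training accuracy on synthetic data:", clf.score(X, labels))
```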

The Benefits of BCI Technologies & Top of Mind Privacy and Security Risks //

BCIs are, and will continue to be, deployed in doctors’ offices, schools, workplaces, and within our homes and communities. As a novel interface, BCIs hold incredible promise across numerous sectors, from health to gaming. However, each sectoral use comes with a host of unique privacy and security risks.

In the health and wellness sector, BCIs are used to monitor fatigue, restore vision and hearing, and control accessibility devices like wheelchairs, as well as help power prosthetic limbs. The propensity of BCIs to not only read brain signals but to potentially stimulate activity in the brain can create benefits for patients. In the diagnostic and treatment arena, BCIs lessen and sometimes eliminate the need for subjective patient responses. In one NIH-funded study, BCIs were used to detect glaucoma progression over time using objective measurements made by the BCI; this represents a potentially substantial diagnostic improvement over traditional glaucoma assessment, which relies on subjective patient-generated data. In the accessibility space, BCIs have spun off a new generation of neuroprosthetics, or artificial limbs that move in response to patients’ thoughts, and have aided in the creation of BCI-powered wheelchairs.

While health-related BCIs are promising treatments for some patients, they can be vulnerable to security breaches. Recently, researchers showed that hackers, through imperceptible noise variations of an EEG signal, could force BCIs to spell out certain words. According to the researchers, the consequence of this security vulnerability could range from user frustration to a severe misdiagnosis.

In the gaming context, non-invasive wearables outfitted with EEG electrodes provide players with the ability to play games and control in-game objects using their thoughts. Games such as The Adventures of Neuroboy allow players to move objects in the game using their thoughts, which are measured through an EEG-fitted cap. Companies like Neurable are looking to push the limits of user interaction even further by developing AR/VR headsets outfitted with EEG electrodes that collect brainwave data—this data acts as the primary driver of gameplay. In Neurable’s first demo, Awakening, the player assumes the role of a psychokinetically-gifted child who must escape from a government prison. Through reading the player’s electrical brain impulses, the BCI lets the player choose between a host of objects to escape from prison and advance through the game.

BCIs can make games more immersive for players and give game developers novel tools. But advances in immersive gaming depend on the collection of neuroinformation, which can lead to heightened privacy risks. Existing immersive games in the AR/VR space often rely on collecting and processing potentially sensitive personal information such as geolocation data and biometric data like the player’s gait and eye movement, as well as audio and video recordings of the player. Future-looking gaming hardware, such as the headset being developed by Neurable, could pair neuroinformation with vast sets of sensitive personal information, which increases the chances of user identifiability, while potentially revealing sensitive biological information about the player. This data could be used for a range of commercial purposes, from product improvement and personalization to behavioral advertising and profiling.

BCIs can also be found in schools, where companies claim they can measure student attentiveness. For example, BrainCo, Inc. is developing BCI technology that involves students wearing EEG-fitted headbands in class. The students’ neuroinformation is gathered and displayed on a teacher’s dashboard, which allegedly provides insight into student attention levels. In the future, BCIs might be deployed in the education arena to aid students with learning disabilities; some schools already employ AR and VR technologies for this purpose. While personalized education metrics can be helpful to students, parents, and teachers, inaccurate BCI data in the education context could lead to false conclusions about student aptitude, and accurate information could put students at risk of disproportionate penalties for inattentiveness or other behavior.

Measuring attentiveness through the use of BCIs is not unique to the education space. Currently, most of the uses of BCIs in the workplace are proposed as ways to measure engagement and improve employee performance during high-risk tasks. BCIs deployed in the employment context can raise the risks of employer surveillance and discrimination (e.g., data about workers’ emotions could lead to penalties, firing decisions, and other actions).

One of the most future-looking applications of BCIs is in the smart cities and communities space. As early as 2014, researchers proposed a prototype for a Bluetooth-enabled BCI that could help disabled individuals control and direct a smart car over short distances, increasing individuals’ independence. These prototypes, while still very much a work in progress, open the possibility for more complex uses in the future. The potential benefits are substantial, but these technologies also create risks, including the collection and use of drivers’ neuroinformation in combination with other sensitive data, such as location, when controlling the vehicle. Additionally, mind-controlled cars pose a potential public safety risk, as they would be driven using often imprecise, opaque, and sometimes inaccurate neuroinformation.

In addition to sector-specific privacy risks, BCIs are generally susceptible to the same drawbacks and potential harms associated with other algorithmic processes. For example, harmful bias, a lack of transparency and accountability, as well as a reliance on faulty training data, can lead to individual and collective losses in opportunity. Implementing accurate autonomous systems presents its own set of challenges such as: whether a particular system is appropriate to achieve a desired outcome; whether the systems are designed (and re-designed) to reduce bias; and whether the system raises ethical or legal risks.

As a novel interface, BCIs raise important data protection questions that should be addressed throughout their development cycle. Below, we put forward just a handful of the high-level recommendations that developers should adhere to when seeking to create inclusive and privacy-centric brain-computer interfaces.

Key Recommendations for BCI Development //

Because the collection and use of neuroinformation involves a number of privacy and ethical concerns that go beyond current laws and regulations, stakeholders working in this emerging field should follow these principles for mitigating privacy risks:

(1) Employ Privacy Enhancing Technologies to Safeguard Data. BCI providers should integrate recent advances in privacy enhancing technologies (PETs), such as differential privacy, in accordance with principles of data minimization and privacy by design (a minimal illustrative sketch follows this list).

(2) Ensure On/Off User Controls. Wherever appropriate, BCI users should have the option to control when their devices are on or off. Some devices may need to always be on in order to fulfill their functions—for example, a BCI that treats a neurological condition. However, when being always on is not an essential feature of the device, users should have a clear and definite way to turn off their device. As with other devices, there are considerable privacy risks when a BCI is always gathering data or can be turned on unintentionally.

(3) Enshrine Purpose Limitation. BCI providers should state the purpose for collecting neuroinformation and refrain from using that information for any other purpose absent user consent. For example, if an educational BCI gauges student attentiveness for the purpose of helping a teacher engage the class, it should not use attentiveness data for another purpose—like ranking student performance—without express and informed consent. BCI providers should also consider limiting unnecessary cross-device collection.

(4) Focus on Data Quality. Providers should strive to use the most accurate data collection processes and machine-learning tools available to ensure accuracy and precision. Algorithmic explainability and reproducibility of results are critical components of accuracy. It is important for BCIs to be both accurate (turning neural signals into correct neuroinformation) and precise (consistently reading the same signals to mean the same thing).

(5) Promote Security. BCI providers should take appropriate measures to secure neuroinformation. BCI devices should be secure against hacking and malware, and company servers should be secure against unauthorized access and tampering. Furthermore, data transfers should be accomplished by secure means, subject to strong encryption. 
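
As referenced in recommendation (1), here is a minimal sketch of one such PET: the Laplace mechanism from differential privacy, applied to an aggregate statistic computed from per-user scores. The score bounds, epsilon value, and synthetic data are illustrative assumptions, not a prescription for any particular BCI product.

```python
# Illustrative sketch of the Laplace mechanism for a differentially private mean.
import numpy as np


def dp_mean(values, lower, upper, epsilon, rng=None):
    """Differentially private mean of bounded values via the Laplace mechanism."""
    rng = rng or np.random.default_rng()
    values = np.clip(values, lower, upper)     # enforce the assumed bounds
    n = len(values)
    sensitivity = (upper - lower) / n          # how much one user can shift the mean
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise


# Example: release an aggregate "attention score" without exposing any one user.
# The scores below are synthetic placeholders on an assumed 0-100 scale.
attention_scores = np.random.default_rng(1).uniform(0, 100, size=500)
print(dp_mean(attention_scores, lower=0, upper=100, epsilon=1.0))
```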

Moving Forward //

The Future of Privacy Forum is working with stakeholders to better analyze these issues and make recommendations regarding appropriate data protections for brain-computer interfaces. Curated news, resources, and academic papers on BCIs and related topics are available here. We welcome your thoughts and feedback at [email protected] and [email protected].

Additional Resources //

In addition to the resources we curated here, we found the following academic papers, white papers, ethical frameworks, and policy frameworks helpful to understanding the current BCI landscape. 

Academic Papers //

App Stores for the Brain: Privacy & Security in Brain-Computer Interfaces—by Tamara Bonaci, Ryan Calo, and Howard Chizeck—noting that BCI-enabled technology carries a great potential to improve and enhance the quality of human lives, while also identifying some of the key privacy and security risks associated with BCIs.

Four Ethical Priorities for Neurotechnologies and AI—by Rafael Yuste, et al.—noting that BCI technology can exacerbate social inequalities and offer corporations, hackers, governments, and others new ways to exploit and manipulate people, and that it could profoundly alter some core human characteristics: private mental life, individual agency, and an understanding of individuals as entities bound by their bodies.

Keeping Disability in Mind: A Case Study in Implantable Brain-Computer Interface Research—by Laura Specker Sullivan, Eran Klein, Tim Brown, and Matthew Sample—noting that developers espouse functional, assistive goals for their technology, but often note uncertainty in what degree of function is “good enough” for the individual end user. 

Towards New Human Rights in the Age of Neuroscience and Neurotechnology—by Marcello Ienca and Roberto Andorno—assessing the implications of emerging neurotechnology applications in the context of human rights frameworks and suggesting that existing human rights may not be sufficient to respond to these emerging issues.

White Papers //

iHuman: Blurring Lines between Mind and Machine—by The Royal Society—arguing that neural interface technologies will continue to raise profound ethical, political, social, and commercial questions that should be addressed as soon as possible in order to create mechanisms to approve, regulate, or control the technologies as they develop, and to manage the impact they may have on society.

Policy Frameworks //

IEEE Neuroethics Framework: Addressing the Ethical, Legal and Social Implications of Neurotechnology—by IEEE—a developing matrix intended to act as a “living document” that will evolve with new BCI technology and new ethical, legal, and social issues, ideas, and perspectives. 

OECD Recommendation on Responsible Innovation in Neurotechnology—adopted by the OECD Council in December 2019—this recommendation is the first international standard in this domain. It aims to guide governments and innovators to anticipate and address the ethical, legal and social challenges raised by novel neurotechnologies while promoting innovation in the field.

Standards Roadmap: Neurotechnologies for Machine Interfacing—available through IEEE—noting the need for standards in the BCI arena and providing an overview of the existing and developing standards in the field of neurotechnologies for brain‐machine interfaces. The roadmap is broken into five main topics: (1) the science behind sensing technologies, (2) feedback mechanisms, (3) data management, (4) user needs, and (5) performance assessments of BCIs.

Image courtesy of Pixabay, available here

FPF Submits Comments Regarding Data Protection & COVID-19 Ahead of National Committee on Vital and Health Statistics Hearing

Yesterday, FPF submitted comments to the National Committee on Vital and Health Statistics (NCVHS) ahead of a Virtual Hearing of the Subcommittee on Privacy, Confidentiality, and Security on September 14, 2020. 

The hearing will explore considerations for data collection and use during a public health emergency, in light of the deployment of new technologies for public health surveillance to tackle the nationwide COVID-19 pandemic. The Subcommittee intends to use input from expert comments and testimony to inform the development and dissemination of a toolkit outlining methods and approaches to collect, use, protect, and share data responsibly during a pandemic. 

In our comments, we highlighted FPF’s recent work exploring the ethical, privacy, and data protection challenges posed by the COVID-19 crisis, and we shared resources that address a number of issues raised by the Committee in the Request for Public Comments. In particular, we provided FPF resources that address: (1) the application of the Fair Information Practice Principles (FIPPs) and proper scope of data collection, analysis, and sharing in an emergency; (2) differences in standards at the local, state, and federal levels; and (3) technical understanding of location data and the design of mobile apps.

In recent months, FPF’s Privacy and Pandemics Series has convened public health experts, academics, advocates, representatives of industry, and other experts to discuss how to create frameworks to safeguard the responsible use of data while creating and employing new tools, such as contact tracing apps. FPF has also developed educational resources, such as an infographic to demonstrate how mobile devices interpret signals from their surroundings, including GPS satellites, cell towers, Wi-Fi networks, and Bluetooth, to generate precise location measurements. We have also explored the differing standards that are arising at the state, federal, and local levels for how to respond to a public health emergency while protecting privacy and personal data. Addressing the proper scope of data collection, analysis, sharing, and retention in an emergency, FPF also recently testified before the U.S. Senate Committee on Commerce, Science, & Transportation and provided input at a Public Work Session of the Washington State Senate Committee on Environment, Energy & Technology.

By informing policymakers about the risks and regulatory gaps associated with location, health, wellness, and other data collected and used during a public health emergency, we hope to promote informed decision-making and regulation. We look forward to continuing to provide resources on the federal and state level to legislators and public health authorities on the responsible and ethical use of data in the fight against COVID-19.

FPF Presents @ RightsCon 2020: “Frontiers in health data privacy: navigating blurred expectations across the patient-consumer spectrum”

The patient-consumer spectrum is a growing concept in which healthcare is rapidly transitioning from a periodic activity in fixed, traditional health care settings to an around-the-clock activity that involves the generation, use, and integration of data reflecting many aspects of individuals’ lives and behaviors. Accompanying this spectrum are blurred distinctions between traditional and consumer-generated health information, and differences in expectations of how health information across this spectrum should be protected or treated.

On July 27, 2020, during the RightsCon 2020 virtual conference, the Future of Privacy Forum’s (FPF’s) Health Policy Counsel and Lead, Dr. Rachele Hendricks-Sturrup, sat down with three health data governance and policy expert panelists to explore the privacy and policy implications across the broadening patient-consumer spectrum:

An audience poll was taken to garner the panel audience’s perspectives regarding the privacy of consumer-generated versus traditional health care data. Just over half (52%) of the audience members who participated in the poll felt that the privacy of consumer-generated health data should be treated the same as traditional health care data.


These split results highlight the need to discuss data privacy and rights across the growing patient-consumer spectrum. The panelists took on this challenge and offered the following key takeaways:

Dr. Hendricks-Sturrup and the panelists concluded that, in order to successfully navigate blurred expectations of privacy across this spectrum and make progress toward establishing meaningful legal and policy frameworks and best practices, diverse stakeholders from industry, academia, and civil society must be engaged and barriers to their collaboration must be addressed.

Read the post-panel white paper here.

Watch the RightsCon panel below:

https://www.youtube.com/watch?v=fDjw4BVLFeo

To learn more about the FPF Health Initiative, contact Dr. Rachele Hendricks-Sturrup at [email protected].

 

Call for Position Statements on Responsible Uses of Technology and Health Data During Times of Crisis

Event Overview

The Future of Privacy Forum, in collaboration with the National Science Foundation, Duke Sanford School of Public Policy, SFI ADAPT Research Centre, Dublin City University, and Intel Corporation, presents Privacy & Pandemics: Responsible Uses of Technology and Health Data During Times of Crisis — An International Tech and Data Conference, including a two-day virtual workshop on October 27-28, 2020, to explore the value and limits of data and technology in the context of a global crisis. Ten months into the COVID-19 pandemic, what role have tech and data played in combating the crisis, what have we learned about the limitations of law, policy, and technical tools, and what areas need reform and additional research?

REGISTER HERE

 

Call for Position Statements

We are soliciting position statements from leading technologists, scientists, policymakers, data experts, companies, and regulators to assess early conclusions about how data and technology have each played a role in efforts to study, control the spread of, and track COVID-19. 

This call invites experts with a perspective on areas such as:

We invite you to submit a 500-1000 word position statement to be considered for inclusion in the upcoming workshop. Authors of accepted submissions will be offered a $1,500 (US) stipend to participate in a relevant workshop session or invited to present a “firestarter” at the virtual event to be held October 27-28, 2020. Accepted submissions will be distributed to workshop attendees in advance for review, assessment, and discussion. The Planning Committee will also organize a number of invited presentations.

A workshop report will be prepared and used by the National Science Foundation to help set direction for the Convergence Accelerator 2021 Workshops, speeding the transition of convergence research into practice to address grand challenges of national importance. 

This event is sponsored in part by the NSF Convergence Accelerator and the NSF Secure and Trustworthy Cyberspace programs, Intel Corporation, the Privacy Tech Alliance, and OneTrust, LLC.

Position Statement Submission 

We are inviting submissions of position statements to catalyze conversations around the future of privacy and technology during times of crisis. We are seeking original, provocative, well-argued statements of approximately 500-1000 words.

We are interested in statements addressing specific, practical, identified challenges faced by academic researchers, public health experts and agencies, technologists, policymakers, industry and others who have played a role in responding to the COVID-19 crisis.

Works by undergraduate students, graduate students, or unaffiliated scholars, as well as from individuals with academic and/or corporate affiliations are all welcome. Works by interdisciplinary teams, specifically those representing the convergence of fields such as engineering, biology, social, and computer sciences are encouraged.

Submission Deadline

Submission of draft position statements: September 30, 2020. **DEADLINE CLOSED**

Submission Format

Reviewer Team

Review Process

Position statements will be reviewed by at least three reviewers, including a subject matter expert. Reviewers will determine if a position statement will move forward for final decision on inclusion in the workshop.

Additional Information

For more information on this effort, including submission instructions, event details, or other questions, please contact Christy Harris at [email protected].

California’s SB 980 Would Codify Strong Protections for Genetic Data

Author: John Verdi (Vice President of Policy)

This week, SB 980 (the “Genetic Information Privacy Act”) passed the California State Assembly and State Senate with near-unanimous support (54-10 and 39-0). If signed by the Governor before the Sept. 30 deadline, it would become the first comprehensive genetic privacy law in the United States, establishing significant new protections for consumers of genetic services.

As we previously wrote and testified, the Genetic Information Privacy Act incorporates many of the protections in FPF’s 2018 Privacy Best Practices for Consumer Genetic Testing Services. Those Best Practices were drafted and published over the course of 2018 in consultation with a multi-stakeholder group of technical experts, scientists, civil society advocates, leading consumer genetic and personal genomic testing companies, and with input from regulators including the Federal Trade Commission (FTC) and the Department of Health and Human Services (HHS). 

Leading genetic testing companies have adopted the Best Practices, making them enforceable by the FTC and state AGs; SB 980 would extend safeguards to users of other genetics companies, protecting consumers and building trust in the industry.

Below we describe 1) the process and results of FPF’s 2018 efforts; and 2) the significance of SB 980 as compared to existing laws and the voluntarily adopted Best Practices.

FPF’s 2018 Stakeholder Process//

In 2018, Future of Privacy Forum conferred with leading genetics services and other experts to explore ways to address consumer privacy concerns related to genetics services. At the time, concerns were emerging in response to the rapid growth of the consumer genetics industry, and highly publicized cases of law enforcement access to genetic data, including the Golden State Killer investigation.

As a non-profit dedicated to convening divergent stakeholders to create workable best practices for emerging technologies, we solicited and received the input of scientists, consumer privacy advocates, government stakeholders, and other experts. FPF published the resulting Best Practices at the end of July 2018.

Since that time, some, but not all, of the direct-to-consumer genetics companies have voluntarily adopted FPF’s Best Practices. Some companies have chosen not to adopt the Best Practices, or to adopt only certain provisions, while others are supportive but have chosen not to formally incorporate the provisions of the Best Practices into their policies. Privacy policies and other voluntary legal commitments can be enforced by the Federal Trade Commission and State Attorneys General.

Why SB 980 is Significant //

If signed by the Governor, SB 980 would be a landmark law for genetic privacy, going beyond existing federal and state laws as well as self-regulation. Although the federal Genetic Information Nondiscrimination Act (GINA) prohibits certain types of discrimination based on genetic information, it does not provide comprehensive privacy protections for the collection of such data or the many ways that it can be used, sold, or shared (including for advertising or law enforcement purposes).

Similarly, the handful of states that have heretofore addressed genetic information privacy have not established comprehensive protections. For example, some states have enacted “mini-GINAs” (including California) or extended GINA-style protections to discrimination in life insurance, disability, or long-term care (Florida). Some states have limited law enforcement access (Nevada), and at least one has attempted to take a more comprehensive approach while recognizing genetic information as the property of the consumer (Alaska).

In contrast, SB 980 would establish broad, comprehensive consumer protections for genetic information. The protections go significantly beyond those that exist for other types of personal information in California under the California Consumer Privacy Act (CCPA), an approach that is justified given the unique sensitivity of genetic information. In particular, genetic information has the ability to reveal intimate information about health and familial connections, and is challenging to de-identify. The bill also contains certain aspects that are unique to the consumer genetics industry, such as the requirement that biological samples be destroyed upon request.

Similarly, SB 980 would go beyond FPF’s Best Practices by directly regulating the entire sector, rather than only the companies that have voluntarily chosen to adopt the Best Practices. Furthermore, although voluntary commitments can be enforced by the Federal Trade Commission (FTC) and others, such enforcement is necessarily limited to unfair and deceptive trade practices, and does not always allow for financial penalties. In contrast, SB 980 would establish civil penalties of up to $1,000 (for negligent violations) or $10,000 (for willful violations).

Penalties could add up quickly, as they are calculated on a per-violation, per-consumer basis. For example, a willful violation affecting 1,000 consumers could, in principle, expose a company to as much as $10 million in penalties.

Conclusion // 

Genetic information carries the potential to empower consumers interested in learning about their health and heritage, and to fuel unparalleled discoveries in personalized medicine and genetic research. Given the Future of Privacy Forum’s mission to convene divergent stakeholders towards workable privacy practices for emerging technologies, it continues to be rewarding to play a role in shaping the leading practices for consumer genetic information. We are optimistic that SB 980 represents a major step forward for consumer rights.

Additional FPF Resources // 

  1. California SB 980 Would Codify Many of FPF’s Best Practices for Consumer Genetic Testing Services, but Key Differences Remain (2020); 
  2. FPF Releases Follow-Up Report on Consumer Genetics Companies and Practice of Transparency Reporting (2020);
  3. A Closer Look at Genetic Data Privacy and Nondiscrimination in 2020 (2020);
  4. FPF and Privacy Analytics Identify “A Practical Path Toward Genetic Privacy” (2020);
  5. Consumer Genetic Testing: A Q&A with Carson Martinez (2019).

How the Student Privacy Pledge Bolsters Legal Requirements and Supports Better Privacy in Education

The Student Privacy Pledge is a public and legally enforceable statement by edtech companies to safeguard student privacy, built around a dozen privacy commitments regarding the collection, maintenance, use, and sharing of student personal information. Since it was introduced in 2014 by the Future of Privacy Forum (FPF) and the Software and Information Industry Association, more than 400 edtech companies have signed the Pledge. In 2015, the White House endorsed the Pledge, and it has influenced company practices, school policies, and lawmakers’ approaches to regulating student privacy. Many school districts use the Pledge as they review prospective vendors, and it is aligned with—and has broader coverage than—the most widely-adopted state student privacy law.* 

We are proud that 436 companies have signed the Pledge, with well over 1,000 applications since June 2016. FPF reviews each applicant’s privacy policy and terms of service to ensure that signatories’ public statements align with the Pledge commitments. If an applicant’s policies do not align with the Pledge, we work with the company to bring them into line. We also work with applicants to ensure that they understand the commitments they are making when they become a Pledge signatory. This process may result in the applicant bringing internal compliance and legal resources to bear in ways it previously did not—an effect that increases accountability. Nearly every company that applies ends up altering its privacy practices and/or policies to become a Pledge signatory. 

The Student Privacy Pledge is a voluntary promise, not a law that applies to everyone. But once companies sign the Pledge, the Federal Trade Commission (FTC) and state Attorneys General (AG) have legal authority to ensure they keep their promises. The FTC and state AGs have a track record of using public commitments like the Student Privacy Pledge as tools to enforce companies’ privacy promises. These legal claims arise from the intersection between Pledge promises and Consumer Protection Unfair & Deceptive Acts & Practices (UDAP) statutes—without the Pledge commitments, it would be much more difficult for the FTC and AGs to bring enforcement actions under state and federal UDAP laws. In the absence of a comprehensive federal consumer privacy law, the Pledge provides an important and unique means for privacy enforcement; it complements state and federal student privacy laws that directly regulate companies and schools. 

In addition to enforcement at the federal and state levels, many schools require vendors to adhere to contracts that are modeled after or heavily mirror the Pledge, and schools have contractual rights to enforce these promises. If companies are found to break their commitments, schools can force a vendor to change practices, terminate contracts with the company, and sue for damages. 

When FPF learns of a complaint about a Pledge signatory, we analyze the issue and reach out to the signatory to understand the complaint, the signatory’s policies and practices, and other relevant information. We typically work with the company to resolve any pledge-covered practices that do not align with the Pledge. We seek to bring signatories into compliance with the Pledge rather than remove them as signatories – an action that could result in fewer privacy protections for users, as a former signatory would not be bound by the Pledge’s promises for future activities.

One of the most common misunderstandings about the Pledge is the assumption that the Pledge applies to all products offered by a signatory or used by a student. However, the Student Privacy Pledge applies to “school service providers”—companies that design and market their services and devices for use in schools. When a company offering services to both school audiences and general audiences becomes a Pledge signatory, the Pledge commitments only apply to the services they provide to schools. Companies selling tools or providing services to the general public are not obligated to redesign these products because they sign the Pledge. This is consistent with most state student privacy laws and proposed federal bills. 

The Pledge is by no means a replacement for pragmatic updates to existing student privacy laws and regulations, or for a comprehensive federal privacy law that would cover all consumers, which FPF supports. The Pledge is a set of commitments intended to build transparency and trust by obligating signatories to make baseline commitments about student privacy that can be enforced by the Federal Trade Commission and state attorneys general. The Pledge is not intended to be a comprehensive privacy policy nor to be inclusive of all the many requirements necessary for compliance with applicable federal and state laws. With that said, most signatories take the Pledge because they wish to be thoughtful and conscientious about privacy.

In 2019, we decided to analyze the Pledge in light of the evolving education ecosystem, incorporating what we’ve learned from reviewing thousands of edtech privacy policies, engaging directly with stakeholders, and reviewing the 130 state laws that have passed since the Pledge was created. We are excited to apply this knowledge with Student Privacy Pledge 2020, being released this fall.

* The Student Privacy Pledge’s commitments are echoed in the most commonly passed student privacy law aimed at edtech service providers, California’s Student Online Personal Information Protection Act (Cal. Bus. & Prof. Code § 22584). A version of this law has been enacted by several states, including: Arizona (Ariz. Rev. Stat. § 15-1046), Arkansas (Ark. Code Ann. § 6-18-109), Connecticut (Conn. Gen. Stat. §§ 10-234bb-234dd), Delaware (Del. Code. Ann. tit. 14, §§ 8101), Georgia (Ga. Stat. § 20-2-660), Hawaii (Hi. Rev. Stat. §§ 302A-499-500), Iowa (Iowa Code § 279.70), Kansas (K.S.A. 72-6312), Maine (Me. Rev. St. Ann. 20-A § 951), Maryland (Md. Educ. Code § 4-131), Michigan (Mich. Comp. Laws § 388.1295), Nebraska (Neb. Rev. Stat. § 79-2,153), New Hampshire (NH. St. § 189:68-a), New Jersey (Assembly Bill 4978, signed into law Jan. 2020), North Carolina (N.C. Gen. Stat. § 115C-401.2), Oregon (Or. Rev. Stat. Ann. § 336.184-187), Texas (Tex. Educ. Code § 32.151), Virginia (Va. Code Ann. § 22.1-289.01), and Washington State (Wash. Rev. Code § 28A.604).