BCIs & Data Protection in Healthcare: Data Flows, Risks, and Regulations
This post is the second in a four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs.
Click here for FPF and IBM’s full report: Privacy and the Connected Mind. In case you missed it, read the first blog post in this series, which unpacks BCI technology. Additionally, FPF-curated resources, including policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are here.
I. Introduction: What are BCIs?
BCIs are computer-based systems that directly record, process, or analyze brain-specific neurodata and translate these data into outputs. Those outputs can be used as visualizations or aggregates for interpretation and reporting purposes and/or as commands to control external interfaces, influence behaviors or modulate neural activity. BCIs can be broadly divided into three categories: 1) those that record brain activity; 2) those that modulate brain activity; or 3) those that do both, also called bi-directional BCIs (BBCIs).
BCIs can be invasive or non-invasive and employ a number of techniques for collecting neurodata and modulating neural signals. Neurodata is data generated by the nervous system, consisting of the electrical activity between neurons or proxies of that activity. This neurodata may be “personal neurodata” if it is reasonably linkable to an individual.
II. Health-related BCIs Diagnose Medical Conditions, Modulate Brain Activity for Cognitive Disorder Management, and Promote Accessibility
Facilitating Diagnoses: BCIs can be used to help make certain diagnoses by providing a means for practitioners to quantify fatigue, identify depression, and measure stress. Diagnostic BCIs can also assist even when a patient is unable to provide responses. These situations may occur when patients experience disorders of consciousness, such as locked-in syndrome, whereby individuals are fully conscious but unable to move, speak, or explain how they are feeling. Additionally, current research efforts focus on BCI applications that diagnose the stage and advancement of progressive conditions, such as glaucoma.
Modulating the Brain to Treat or Overcome Conditions: While diagnosis typically involves simply recording brain activity, other health-related BCI uses may actively modulate patients’ brains and nervous systems. For example, brain modulation can be used to disrupt seizures for epilepsy patients. Recent advances in interventional BCI modulation include a vision restoration study in which the image bypasses the eye and the optic nerve and is fed directly to the brain, resulting in low-resolution vision capabilities.
Improving Accessibility and Rehabilitation Opportunities: The latest prosthetic limbs (i.e., neuroprosthetics) rely on BCIs, which enable the limbs to move in response to thought stimuli. Examples of this BCI application include robotic arms, as well as BCI-powered automatic wheelchairs. Users control neuroprosthetics and personal devices through BCIs that collect neurodata about intended limb movements or about an activity associated with what the user wants to do. An example of the latter involves users thinking of physical activities like “eating,” rather than specific words like “table,” to direct their chair to a nearby object. BCIs can also act as the channel for providing haptic feedback or haptic sensory replacement within prosthetics and exoskeletons, supporting patient rehabilitation, the regaining of sensation, and an increased ability for patients to perform previously inaccessible tasks.
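As a heavily simplified illustration of the intent-to-command pattern described above, the sketch below maps a hypothetical decoded intent label (the kind a classifier might produce from neurodata) to a navigation target for a BCI-powered wheelchair. All names and labels are illustrative assumptions, not drawn from any real device.

```python
# Hypothetical mapping from a decoded high-level intent (a classifier label
# derived from neurodata) to a navigation target, as in the "eating" example.
INTENT_TO_TARGET = {
    "eating": "dining_table",
    "resting": "bed",
    "washing": "bathroom_sink",
}

def command_for(decoded_intent: str) -> str:
    # Fall back to holding position when the decoder's output is unrecognized,
    # since acting on a misread intent is the riskier failure mode.
    return INTENT_TO_TARGET.get(decoded_intent, "hold_position")

print(command_for("eating"))   # dining_table
print(command_for("???"))      # hold_position
```

The safety-oriented default also hints at why accuracy (discussed in Section III) is so central: a wrong mapping here moves a person, not a cursor.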
There are also efforts to connect BCIs with smart devices and the Internet of things (IoT), which could provide individuals experiencing neurological disorders or motor impairments with greater independence in the ability to perform daily living activities. These efforts could improve or sustain a user’s quality of life through increased accessibility within their home environment.
Beyond Medicine – BCIs and Commercial Wellness: BCIs are also starting to emerge in the commercial wellness space as a method of personal data tracking, intended as a means of improving cognitive abilities (such as attention) and/or mental and physical health (such as sleep monitoring). Many of these wellness BCIs overlap with functions included in the gaming and toy space. The NeuroSky Mindwave Mobile 2: Brainwave Starter Kit provides the user with information about their brain’s electrical impulses when relaxing and when listening to music. The product includes an EEG-fitted headband and connects to companion apps via Bluetooth. The device also provides training games purported to help improve meditation and attention and to enhance the user’s learning effectiveness. Further, the device includes tools for players to create their own brain-training games.
III. Health-related BCI Risks Include Security Breaches, Infringement on Mental Privacy, and Data Inaccuracy
Security Breaches: Security breaches are some of the most prominent risks in the health BCI space. Like other technology-based medical devices, BCIs are vulnerable to cyber risks. Researchers recently showed that hackers, by adding imperceptible noise to an EEG signal, could force BCIs to spell out certain words that do not align with the wearer’s actual thoughts or intentions. The consequences of these security vulnerabilities can range from user frustration to severe misdiagnosis and physical harm. Breaches of BCIs may also compromise sensitive health information that could be captured or inadvertently shared.
BCI Accuracy: An equally important risk among health-related BCIs is the extent to which device accuracy is verifiable and sufficient. In many applications, high reliability of medical BCIs is critical because inaccurate interpretation or modulation of a patient’s brain could result in serious consequences, including death. Patients relying on modulating BCIs to help mitigate cognitive disorders, such as epilepsy, could suffer grave health consequences if the BCI failed to work as intended and anticipated. Risks are particularly acute when patients rely on BCIs to communicate crucial information, such as their choices regarding treatment or even end-of-life decisions. Accuracy is also crucial to reliable, continuous accessibility, as prosthetic limbs, wheelchairs, and other devices controlled via BCIs must operate correctly and safely according to users’ intentions.
Infringement on Mental Privacy and BCI-informed Decision Making: Finally, BCIs also present privacy risks. These risks refer to unauthorized access to personal information, including the inferences drawn from an individual’s conscious or unconscious behaviors and intentions. In addition to the existing privacy risks around all personal health data, BCIs raise new mental privacy risks due to the capacity of the neural networks underpinning many of these devices to associate patterns of neural activity with certain thoughts, and the ability of BCIs to define and interpret subconscious or causally-connected intentions on a wider scale. For example, a BCI-controlled wheelchair and its underlying neural network might not only deduce that the user is thinking about food, therefore directing the chair to move toward the table, but also draw other conclusions about the individual’s biology and preferences, such as whether an individual is hungry or thirsty and at what times. These additional inferences capture new information about an individual’s thoughts, intentions, or interests, many of which are related to an individual’s specific biology and unique preferences.
Privacy risks are magnified when these new inferences are combined with other personal information to make decisions that impact the person’s life, potentially without their knowledge or consent. Organizations collecting and processing brain signals, leading to granular inferences tied to an individual, could have incentives to repurpose this data for unrequested treatments or non-medical purposes, many of which may expose potentially sensitive biological information to third parties. Additionally, the sharing of patient data associated with BCI use could potentially disclose an individual’s medical condition to employers, private companies, public entities, or governments.
IV. Some Health BCIs are Subject to Common Rule Requirements, FCC Oversight, or International Frameworks
Common Rule: Some of the advancements in health BCIs involve human subject research, which is governed by a complex regulatory framework. U.S. researchers whose projects are federally funded are typically required to obtain subjects’ informed consent for data collection based on approval from a Common Rule-based Institutional Review Board (IRB) prior to undertaking studies.
FCC Oversight: Wireless IoT BCI devices are likely subject to Federal Communications Commission (FCC) oversight because of their designation as connected wearables. However, given the lack of regulations around consumer wellness technologies, devices marketed outside of the physician-regulated context—such as brain training games and meditation-aiding devices—may lack strict oversight. For example, the Health Insurance Portability and Accountability Act (HIPAA) regulates covered entities such as physicians and health insurers that collect, use, process, and share health information, but does not usually apply to wellness device companies.
International Frameworks: In Europe, the General Data Protection Regulation (GDPR) is the applicable framework for any processing of personal data for the purposes of scientific research, including where the research relies on special categories of personal data, such as data related to health, and biometric data processed for identification. There are several lawful grounds for processing under Article 6(1) that would allow the necessary processing of personal data for BCI research, as well as several permissions under Article 9(2) for the use of sensitive personal data. In some situations, this could allow data controllers to conduct this type of research even without individual consent for the processing of the data, specifically when sensitive data is necessary for public health purposes or for research in the public interest. However, there are many complexities surrounding this sort of processing, and the European Data Protection Board (EDPB) is expected to adopt Guidelines on the processing of personal data for scientific research purposes in the near future. Given the complexities surrounding privacy in human subject research, health researchers and other stakeholders seeking to develop or adopt BCIs must understand and verify how the product fits into this shifting regulatory landscape.
The EU’s recently proposed draft AI regulation covers all AI systems, including those relying on biometric data—and is likely to be relevant for future regulation of personal neurodata, significantly altering the regulatory landscape around BCIs and neurotech. It specifically focuses on AI systems that pose high risks to individuals’ “health, safety and fundamental rights.” BCIs that might be considered “high risk” AI systems under the proposed regulation could trigger requirements prior to entering the market, such as undergoing a conformity assessment, adopting adequate risk assessments, providing security guarantees, and giving adequate notice to the user, among others. If considered a “low risk” system, organizations would still have to fulfill transparency requirements. The full scope and impact of the EU’s AI regulation on the development and use of BCIs remains subject to the ongoing legislative process.
V. Conclusion
Health BCIs are set to influence and potentially improve healthcare by expanding accessibility and rehabilitation opportunities, as well as by giving medical practitioners new ways to diagnose and treat conditions. However, these applications are not without risk. The data flows that underpin medical BCIs raise privacy considerations, as well as risks in regard to how neurodata is secured and whether such data is accurate. Companies dealing with medical BCIs must remain abreast of these challenges and analyze how medical BCIs interact with a dynamic, global body of regulation.
Understanding why the first pieces fell in the transatlantic transfers domino
The Austrian DPA and the EDPS decided EU websites placing US cookies breach international data transfer rules
Two decisions issued by Data Protection Authorities (DPAs) in Europe and published in the second week of January 2022 found that two websites, one run by a contractor of the European Parliament (EP) and the other by an Austrian company, had unlawfully transferred personal data to the US merely by placing cookies (Google Analytics and Stripe) provided by two US-based companies on the devices of their visitors. Both decisions looked into the transfer safeguards put in place by the controllers (the legal entities responsible for the websites) and found them to be either insufficient – in the case against the EP – or ineffective – in the Austrian case.
Both decisions affirm that all transfers of personal data from the EU to the US need “supplemental measures” on top of their Article 46 GDPR safeguards, in the absence of an adequacy decision and under the current US legal framework for government access to personal data for national security purposes, as assessed by the Court of Justice of the EU in its 2020 Schrems II judgment. Moreover, the Austrian case indicates that in order to be effective, the supplemental measures adduced to safeguard transfers to the US must “eliminate the possibility of surveillance and access [to the personal data] by US intelligence agencies”, seemingly putting to rest the idea of the “risk based approach” in international data transfers post-Schrems II.
This piece analyzes the two cases comparatively, considering they have many similarities beyond their timing: they both target widely used cookies (Google Analytics, plus Stripe in the EP case), they both stem from complaints in which individuals are represented by the Austrian NGO noyb, and it is possible they will be followed by similar decisions from the other DPAs that received a batch of 101 complaints from the same NGO in August 2020, relying on identical legal arguments and very similar facts. It walks through the most important findings made by the two regulators, showing how their analyses were in sync and how these analyses likely preface similar decisions for the rest of the complaints.
1. “Personal data” is being “processed” through cookies, even if users are not identified and even if the cookies are thought to be “inactive”
In the first decision, the European Data Protection Supervisor (EDPS) investigated a complaint made by several Members of the European Parliament against a website made available by the EP to its Members and staff in the context of managing COVID-19 testing. The complainants raised concerns with regard to transfers of their personal data to the US through cookies provided by US-based companies (Google and Stripe) and placed on their devices when accessing the COVID-19 testing website. The case was brought under the Data Protection Regulation for EU Institutions (EUDPR), which has identical definitions and overwhelmingly similar rules to the GDPR.
One of the key issues that was analyzed in order for the case to be considered falling under the scope of the EUDPR was whether personal data was being processed through the website by merely placing cookies on the devices of those who accessed it. Relying on its 2016 Guidelines on the protection of personal data processed through Web Services, the EDPS noted in the decision that “tracking cookies, such as the Stripe and Google Analytics cookies, are considered personal data, even if the traditional identity parameters of the tracked users are unknown or have been deleted by the tracker after collection”. It also noted that “all records containing identifiers that can be used to single out users, are considered as personal data under the Regulation and must be treated and protected as such”.
The EP argued in one of its submissions to the regulator that the Stripe cookie “had never been active, since registration for testing for EU Staff and Members did not require any form of payment”. However, the EP also confirmed that the dedicated COVID-19 testing website, which was built by its contractor, copied code from another website run by the same contractor, and “the parts copied included the code for a cookie from Stripe that was used for online payment for users” of the other website. In its decision, the EDPS highlighted that “upon installation on the device, a cookie cannot be considered ‘inactive’. Every time a user visited [the website], personal data was transferred to Stripe through the Stripe cookie, which contained an identifier. (…) Whether Stripe further processed the data transferred through the cookie is not relevant”.
With regard to the Google Analytics cookies, the EDPS only notes that the EP (as controller) acknowledged that the cookies “are designed to process ‘online identifiers, including cookie identifiers, internet protocol addresses and device identifiers’ as well as ‘client identifiers’”. The regulator concluded that personal data were therefore transferred “through the above-mentioned trackers”.
In the second decision, which concerned the use of Google Analytics by a website owned by an Austrian company and targeting Austrian users, the DPA argued in more detail what led it to find that personal data was being processed by the website through Google Analytics cookies, under the GDPR.
1.1 Cookie identification numbers, by themselves, are personal data
The DPA found that the cookies contained identification numbers, including a UNIX timestamp at the end, which shows when a cookie was set. It also noted that the cookies were placed either on the device or the browser of the complainant. The DPA affirmed that relying on these identification numbers makes it possible for both the website and Google Analytics “to distinguish website visitors … and also to obtain information as to whether the visitor is new or returning”.
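To make the DPA’s point concrete: Google Analytics’ widely used `_ga` cookie commonly follows a `GA1.<depth>.<client-id>.<unix-timestamp>` layout, where the final field is the UNIX timestamp the decision refers to. The sketch below pulls out those components; the exact format can vary across versions and configurations, so treat this as an illustrative assumption rather than a specification.

```python
from datetime import datetime, timezone

def parse_ga_cookie(value: str) -> dict:
    """Split a _ga-style cookie value into its components.

    Assumes the common "GA1.<depth>.<client-id>.<unix-timestamp>" layout;
    real cookie formats vary, so this is only an illustration.
    """
    parts = value.split(".")
    client_id, ts = parts[-2], int(parts[-1])
    return {
        # Pseudonymous identifier that lets the tracker single out a visitor
        # across visits, even without knowing the visitor's name.
        "client_id": client_id,
        # The trailing UNIX timestamp records when the cookie was set.
        "first_seen": datetime.fromtimestamp(ts, tz=timezone.utc),
    }

print(parse_ga_cookie("GA1.2.1234567890.1612345678"))
```

The `client_id` field is precisely the kind of identification number the DPA found sufficient to “distinguish website visitors” and to tell new visitors from returning ones.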
In its legal analysis, the DPA noted that “an interference with the fundamental right to data protection … already exists if certain entities take measures – in this case, the assignment of such identification numbers – to individualize website visitors”. Analyzing the “identifiability” component of the definition of “personal data” in the GDPR, and relying on its Recital 26, as well as on Article 29 Working Party Opinion 4/2007 on the concept of “personal data”, the DPA clarified that “a standard of identifiability to the effect that it must also be immediately possible to associate such identification numbers with a specific natural person – in particular with the name of the complainant – is not required” for data thus processed to be considered “personal data”.
The DPA also recalled that “a digital footprint, which allows devices and subsequently the specific user to be clearly individualized, constitutes personal data”. The DPA concluded that the identification numbers contained in the cookies placed on the complainant’s device or browser are personal data, highlighting their “uniqueness”, their ability to single out specific individuals and rebutting specifically the argument the respondents made that no means are in fact used to link these numbers to the identity of the complainant.
1.2 Cookie identification numbers combined with other elements are additional personal data
However, the DPA did not stop here and continued at length in the following sections of the decision to underline why placing the cookies at issue when accessing the website constitutes processing of personal data. It noted that the classification as personal data “becomes even more apparent if one takes into account that the identification numbers can be combined with other elements”, like the address and HTML title of the website and the subpages visited by the complainant; information about the browser, operating system, screen resolution, language selection and the date and time of the website visit; the IP address of the device used by the complainant. The DPA considers that “the complainant’s digital footprint is made even more unique following such a combination [of data points]”.
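The “digital footprint” logic can be sketched in a few lines: concatenating the kinds of attributes the decision lists and hashing them yields a compact value that becomes more unique with every attribute added. The attribute values below are invented for illustration; the technique is a generic fingerprinting sketch, not the respondents’ actual processing.

```python
import hashlib

# Hypothetical visit attributes of the kind enumerated in the decision:
# page visited, browser/OS details, screen, language, time, and IP address.
visit = {
    "url": "https://www.example.at/subpage",
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/80.0",
    "screen": "1920x1080",
    "language": "de-AT",
    "timestamp": "2020-08-14T10:32:07+02:00",
    "ip": "198.51.100.77",
}

# Joining the attributes in a fixed key order yields a stable string whose
# hash serves as a stand-in for the combined "digital footprint": the more
# attributes are folded in, the more likely the result singles out one visitor.
footprint = hashlib.sha256(
    "|".join(visit[k] for k in sorted(visit)).encode()
).hexdigest()
print(footprint[:16])
```

No single attribute need name the visitor; it is the combination that, in the DPA’s words, makes the footprint “even more unique”.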
The “anonymization function of the IP address” – a function that Google Analytics offers to users who choose to activate it – was expressly set aside by the DPA, since fact finding showed the function was not correctly implemented by the website at the time of the complaint. However, later in the decision, with regard to the same function and the fact that it was not implemented by the website, the regulator noted that “the IP address is in any case only one of many pieces of the puzzle of the complainant’s digital footprint”, hinting that even if the function had been correctly implemented, it would not necessarily have led to the conclusion that the data being processed was not personal.
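For context on what that function actually does: Google Analytics’ IP masking is documented as zeroing the last octet of an IPv4 address (and, for IPv6, the last 80 bits) before storage. The sketch below reproduces that truncation with the standard library; it illustrates why a masked IP still narrows a visitor down to a /24 or /48 block, i.e., remains only one removed “piece of the puzzle”.

```python
import ipaddress

def anonymize_ip(addr: str) -> str:
    """Truncate an IP address the way GA-style IP masking is documented to:
    last octet zeroed for IPv4, last 80 bits zeroed for IPv6."""
    ip = ipaddress.ip_address(addr)
    # Keep the /24 (IPv4) or /48 (IPv6) network prefix, discard the rest.
    prefix = 24 if ip.version == 4 else 48
    network = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
    return str(network.network_address)

print(anonymize_ip("198.51.100.77"))          # "198.51.100.0"
print(anonymize_ip("2001:db8:1234:5678::1"))  # "2001:db8:1234::"
```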
1.3 Controllers and other persons “with lawful means and justifiable effort” will count for the identifiability test
Drilling down even more on the notion of “identifiability” in a dedicated section of the decision, the DPA highlights that in order for the data processed through the cookies at issue to be personal, “it is not necessary that the respondents can establish a personal reference on their own, i.e. that all information required for identification is with them. […] Rather, it is sufficient that anyone, with lawful means and justifiable effort, can establish this personal reference”. Therefore, the DPA took the position that “not only the means of the controller [the website in this case] are to be taken into account in the question of identifiability, but also those of ‘another person’”.
After recalling that the CJEU repeatedly found that “the scope of application of the GDPR is to be understood very broadly” (e.g. C-439/19 B, C-434/16 Nowak, C-553/07 Rijkeboer), the DPA nonetheless stated that in its opinion, the term “anyone” it referred to above, and thus the scope of the definition of personal data, “should not be interpreted so broadly that any unknown actor could theoretically have special knowledge to establish a reference; this would lead to almost any information falling within the scope of application of the GDPR and a demarcation from non-personal data would become difficult or even impossible”.
This being said, the DPA considers that the “decisive factor is whether identifiability can be established with a justifiable and reasonable effort”. In the case at hand, the DPA considers that there are “certain actors who possess special knowledge that makes it possible to establish a reference to the complainant and identify him”. These actors are, from the DPA’s point of view, certainly the provider of the Google Analytics service and, possibly the US authorities in the national security area. As for the provider of Google Analytics, the DPA highlights that, first of all, the complainant was logged in with his Google account at the time of visiting the website.
The DPA indicates this is a relevant fact only “if one takes the view that the online identifiers cited above must be assignable to a certain ‘face’”. The DPA finds that such an assignment to a specific individual is in any case possible in the case at hand. As such, the DPA states that: “[…] if the identifiability of a website visitor depends only on whether certain declarations of intent are made in the account (user’s Google account – our note), then, from a technical point of view, all possibilities of identifiability are present”, since, as noted by the DPA, otherwise Google “could not comply with a user’s wishes expressed in the account settings for ‘personalization’ of the advertising information received”. It is not immediately clear how the ad preferences expressed by a user in their personal account are linked to the processing of data for Google Analytics (and thus website traffic measurement) purposes, and it seems that this was used in the argumentation to substantiate the claim that the second respondent generally has additional knowledge across its various services that could lead to the identification or the singling out of the website visitor.
However, following the arguments of the DPA, on top of the autonomous finding that cookie identification numbers are personal data, it seems that even if the complainant had not been logged into his account, the data processed through the Google Analytics cookies would still have been considered personal. In this context, the DPA “expressly” notes that “the wording of Article 4(1) of the GDPR is unambiguous and is linked to the ability to identify and not to whether identification is ultimately carried out”.
Moreover, “irrespective of the second respondent” – so even if Google admittedly did not have any possibility or ability to render the complainant identifiable or to single him out, other third parties in this case were considered to have the potential ability to identify the complainant: US authorities.
1.4 Additional information potentially available to US intelligence authorities, taken into account for the identifiability test
Lastly, according to the decision, the US authorities in the national security area “must be taken into account” when assessing the potential of identifiability of the data processed through cookies in this case. The DPA considers that “intelligence services in the US take certain online identifiers, such as the IP address or unique identification numbers, as a starting point for monitoring individuals. In particular, it cannot be ruled out that intelligence services have already collected information with the help of which the data transmitted here can be traced back to the person of the complainant.”
To show that this is not merely a “theoretical danger”, the DPA relies on the findings of the CJEU in Schrems II with regard to the US legal framework and the “access possibilities” it offers to authorities, and on Google’s Transparency Report, “which proves that data requests are made to [it] by US authorities.” The regulator further decided that even if it is admittedly not possible for the website to check whether such access requests are made in individual cases and with regard to the visitors of the website, “this circumstance cannot be held against affected persons, such as the complainant. Thus, it was ultimately the first respondent as the website operator who, despite publication of the Schrems II judgment, continued to use the Google Analytics tool”.
Therefore, based on the findings of the Austrian DPA in this case, at least two of the “any persons” mentioned in Recital 26 GDPR that will be considered when deciding who has lawful means to identify data, such that the data is deemed personal, are the processor of a specific processing operation and the national security authorities that may have access to that data, at least in cases where this access is relevant (as in international data transfers). This latter finding raises the question of whether national security agencies in a given jurisdiction may generally be considered by DPAs as actors with “lawful means” and additional knowledge when deciding whether a data set links to an “identifiable” person, even in cases where international data transfers are not at issue.
The DPA concluded that the data processed by the Google Analytics cookies is personal data and falls under the scope of the GDPR. Importantly, the cookie identification numbers were found to be personal data by themselves. Additionally, the other data elements potentially collected through cookies together with the identification numbers are also personal data.
2. Data transfers to the US are taking place by placing cookies provided by US-based companies on EU-based websites
Once the supervisory authorities established that the data processed through Google Analytics and, respectively, Stripe cookies, were personal data and were covered by the GDPR or EUDPR respectively, they had to ascertain whether an international transfer of personal data from the EU to the US was taking place in order to see whether the provisions relevant to international data transfers were applicable.
The EDPS was again concise. It stated that because the personal data were processed by two entities located in the US (Stripe and Google LLC) on the EP website, “personal data processed through them were transferred to the US”. The regulator strengthened its finding by stating that this conclusion “is reinforced by the circumstances highlighted by the complainants, according to which all data collected through Google Analytics is hosted (i.e. stored and further processed) in the US”. For this particular finding, the EDPS referred, under footnote 27 of the decision, to the proceedings in Austria “regarding the use of Google Analytics in the context of the 101 complaints filed by noyb on the transfer of data to the US when using Google Analytics”, in an evident indication that the supervisory authorities are coordinating their actions.
In turn, the Austrian DPA applied the criteria laid out by the EDPB in its draft Guidelines 5/2021 on the relationship between the scope of Article 3 and Chapter V GDPR, and found that all the conditions are met. The administrator of the website is the controller and it is based in Austria, and, as data exporter, it “disclosed personal data of the complainant by proactively implementing the Google Analytics tool on its website and as a direct result of this implementation, among other things, a data transfer to the second respondent to the US took place”. The DPA also noted that the second respondent, in its capacity as processor and data importer, is located in the US. Hence, Chapter V of the GDPR and its rules for international data transfers are applicable in this case.
However, it should also be highlighted that, as part of fact finding in this case, the Austrian DPA noted that the version of Google Analytics subject to this case was provided by Google LLC (based in the US) until the end of April 2021. Therefore, for the facts of the case which occurred in August 2020, the relevant processor and eventual data importer was Google LLC. But the DPA also noted that since the end of April 2021, Google Analytics has been provided by Google Ireland Limited (based in Ireland).
One important question that remains for future cases is whether, under these circumstances, the DPA would find that an international data transfer occurred, considering the criteria laid out in the draft EDPB Guidelines 5/2021, which specifically require (at least in the draft version, currently subject to public consultation) that “the data importer is located in a third country”, without any further specifications related to corporate structures or location of the means of processing.
2.1 In the absence of an adequacy decision, all data transfers to the US based on “appropriate safeguards”, like SCCs, need supplementary measures
After establishing that international data transfers occurred from the EU to the US in the cases at hand, the DPAs assessed the lawful ground for transfers used.
The EDPS noted that EU institutions and bodies “must remain in control and take informed decisions when selecting processors and allowing transfers of personal data outside the EEA”. It followed that, absent an adequacy decision, they “may transfer personal data to a third country only if appropriate safeguards are provided, and on condition that enforceable data subject rights and effective legal remedies for data subjects are available”. Noting that the use of Standard Contractual Clauses (SCCs) or another transfer tool does not substitute for the individual case-by-case assessments that must be carried out in accordance with the Schrems II judgment, the EDPS stated that EU institutions and bodies must carry out such assessments “before any transfer is made”, and, where necessary, they must implement supplemental measures in addition to the transfer tool.
The EDPS recalled some of the key findings of the CJEU in Schrems II, in particular the fact that “the level of protection of personal data in the US was problematic in view of the lack of proportionality caused by mass surveillance programs based on Section 702 of the Foreign Intelligence Surveillance Act (FISA) and Executive Order (EO) 12333 read in conjunction with Presidential Policy Directive (PPD) 28 and the lack of effective remedies in the US essentially equivalent to those required by Article 47 of the Charter”.
Significantly, the supervisory authority then affirmed that “transfers of personal data to the US can only take place if they are framed by effective supplementary measures in order to ensure an essentially equivalent level of protection for the personal data transferred”. Since the EP did not provide any evidence or documentation about supplementary measures being used on top of the SCCs it referred to in the privacy notice on the website, the EDPS found the transfers to the US to be unlawful.
Similarly, the Austrian DPA in its decision recalled that the CJEU “already dealt” with the legal framework in the US in its Schrems II judgment, as based on the same three legal acts (Section 702 FISA, EO 12333, PPD 28). The DPA merely noted that “it is evident that the second respondent (Google LLC – our note) qualifies as a provider of electronic communications services” within the meaning of FISA Section 702. Therefore, it has “an obligation to provide personally identifiable information to US authorities pursuant to 50 US Code §1881a”. Again, the DPA relied on Google’s Transparency Report to show that “such requests are also regularly made to it by US authorities”.
Considering the legal framework in the US as assessed by the CJEU, just like the EDPS did, the Austrian DPA also concluded that the mere entering into SCCs with a data importer in the US cannot be assumed to ensure an adequate level of protection. Therefore, “the data transfer at issue cannot be based solely on the standard data protection clauses concluded between the respondents”. Hence, supplementary measures must be adduced on top of the SCCs. The Austrian DPA relied significantly on the EDPB Recommendation 1/2020 on measures that supplement transfer tools when analyzing the available supplementary measures put in place by the respondents.
2.2 Supplementary measures must “eliminate the possibility of access” by the government to the data in order to be effective
When analyzing the various measures put in place to safeguard the personal data being transferred, the DPA wanted to ascertain “whether the additional measures taken by the second respondent close the legal protection gaps identified in the CJEU [Schrems II] ruling – i.e. the access and monitoring possibilities of US intelligence services”. Setting this as a target, it went on to analyze the individual measures proposed.
The contractual and organizational supplementary measures considered in the case were:
notification of the data subject about data requests (should this be permissible at all in individual cases),
the publication of a transparency report,
the publication of guidelines “for handling government requests”,
careful consideration of any data requests.
The DPA considered that “it is not discernable” to what extent these measures are effective to close the protection gap, taking into account that the CJEU found in the Schrems II judgment that even “permissible (i.e. legal under US law) requests from US intelligence agencies are not compatible with the fundamental right to data protection under Article 8 of the EU Charter of Fundamental Rights”.
The technical supplementary measures considered were:
the protection of communications between Google services,
the protection of data in transit between data centers,
the protection of communications between users and websites,
“on-site security”,
encryption technologies, for example encryption of data at rest in data centers,
processing pseudonymous personal data.
With regard to encryption as one of the supplementary measures being used, the DPA took into account that a data importer covered by Section 702 FISA, as is the case in the current decision, “has a direct obligation to provide access to or surrender such data”. The DPA considered that “this obligation may expressly extend to the cryptographic keys without which the data cannot be read”. Therefore, it seems that as long as the keys are kept by the data importer and the importer is subject to the US law assessed by the CJEU in Schrems II (FISA Section 702, EO 12333, PPD 28), encryption will not be considered sufficient.
As for the argument that the personal data being processed through Google Analytics is “pseudonymous” data, the DPA rejected it relying on findings made by the Conference of German DPAs that the use of cookie IDs, advertising IDs, and unique user IDs does not constitute pseudonymization under the GDPR, since these identifiers “are used to make the individuals distinguishable and addressable”, and not to “disguise or delete the identifying data so that data subjects can no longer be addressed” – which the Conference considers to be one of the purposes of pseudonymization.
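The DPA's point about pseudonymization can be illustrated with a minimal, hypothetical Python sketch (the function name and salt are our own illustrative assumptions, not part of the decision): replacing a raw identifier with a salted hash produces a stable pseudonym, so the same individual remains distinguishable and addressable across visits, which is exactly why the Conference of German DPAs did not consider such identifiers pseudonymized.

```python
import hashlib


def pseudonymize(user_id, salt="site-salt"):
    """Replace a raw identifier with a truncated salted hash.

    The output hides the original value, but it is deterministic:
    the same user always maps to the same pseudonym, so the user
    stays distinguishable and addressable - the property the DPAs
    found incompatible with pseudonymization under the GDPR.
    """
    return hashlib.sha256((salt + ":" + user_id).encode()).hexdigest()[:16]
```

Because the mapping is stable, such an identifier can still be used to single out and re-contact an individual, even though the raw value is no longer visible.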
Overall, the DPA found that the technical measures proposed were not enough because the respondents did not comprehensively explain (therefore, the respondents had the burden of proof) to what extent these measures “actually prevent or restrict the access possibilities of US intelligence services on the basis of US law”.
With this finding, highlighted also in the operative part of the decision, the DPA seems to de facto reject the “risk based approach” to international data transfers, which was specifically invoked during the proceedings. This is a theory according to which, for a transfer to be lawful in the absence of an adequacy decision, it is sufficient to prove that the likelihood of the government accessing personal data transferred on the basis of additional safeguards is minimal or reduced in practice for a specific transfer, regardless of the broad authority that the government has under the relevant legal framework to access that data and regardless of the lack of effective redress.
The Austrian DPA is effectively taking the view that it is not sufficient to reduce the risk of access to data in practice, as long as the possibility of accessing personal data on the basis of US law is not actually prevented, or in other words, not eliminated. This conclusion is apparent also from the language used in the operative part of the decision, where the DPA summarizes its findings as follows: “the measures taken in addition to the SCCs … are not effective because they do not eliminate the possibility of surveillance and access by US intelligence agencies”.
If other DPAs confirm this approach for transfers from the EU to the US in their decisions, the list of potentially effective supplemental measures for transfers of personal data to the US will remain minimal – prima facie, it seems that nothing short of anonymization (per the GDPR standard) or any other technical measure that will effectively and physically eliminate the possibility of accessing personal data by US national security authorities will suffice under this approach.
A key reminder here is that the list of supplementary measures detailed in the EDPB Recommendation concerns all international data transfers based on additional safeguards, to all third countries in general, in the absence of an adequacy decision. In the decision summarized here, the supplementary measures found to be ineffective concern their ability to cover “gaps” in the level of data protection of the US legal framework, as resulting from findings of the CJEU with regard to three specific legal acts (FISA Section 702, EO 12333 and PPD 28). Therefore, the supplementary measures discussed and their assessment may be different for transfers to another jurisdiction.
2.3 Are data importers liable for the lawfulness of the data transfer?
One of the most consequential findings of the Austrian DPA that may have an impact on international data transfers cases moving forward is that “the requirements of Chapter V of the GDPR must be complied with by the data exporter, but not by the data importer” – therefore, under this interpretation, the organizations that are on the receiving end of a data transfer, at least when they are a processor for the data exporter like in the present case, cannot be found in breach of the international data transfers obligations under the GDPR. The main argument used was that “the second respondent (as data importer) does not disclose the personal data of the complainant, but (only) receives them”. As a result, Google was found not to breach Article 44 GDPR in this case.
However, the DPA did consider that it is necessary to look further, and as part of separate proceedings, into how the second respondent complied with its obligations as a data processor, and in particular the obligation to process personal data on documented instructions from the controller, including with regard to transfers of personal data to a third country or an international organization, as detailed in Article 28(3)(a) and Article 29 GDPR.
3. Sanctions and consequences: Between preemptive deletion of cookies, reprimands and blocking transfers
Another commonality of the two decisions summarized is that neither of them resulted in a fine. The EDPS issued a reprimand against the European Parliament for several breaches of the EUDPR, including those related to international data transfers “due to its reliance on the Standard Contractual Clauses in the absence of a demonstration that data subjects’ personal data transferred to the US were provided an essential equivalent level of protection”. Notably, the EP asked the website service provider to disable both Google Analytics and Stripe cookies within days of being contacted by the complainants on October 27, 2020. The cookies at issue were active between September 30, when the website became available, and November 4, 2020.
In turn, the Austrian DPA found that “the Google Analytics tool (at least in the version of August 14, 2020) can thus not be used in compliance with the requirements of Chapter V GDPR”. However, as discussed above, the DPA found that only the website operator – as the data exporter – was in breach of Article 44 GDPR. The DPA decided not to issue a fine in this case.
However, the DPA sought to impose a ban on the data transfers or a similar order against the website, though with some procedural complications. In the middle of the proceedings, the Austrian company that was in charge of managing the website transferred the responsibility of operating it to a company based in Germany, so the website is no longer under its control. But since the DPA noted that Google Analytics was still implemented on the website at the time of the decision, it resolved to refer the case to the competent German supervisory authority with regard to the possible use of remedial powers against the new operator.
It therefore seems that the focus in these cases is stopping the transfer of personal data to the US without appropriate safeguards, rather than sanctioning the data exporters. The parties may challenge both decisions before their respective competent courts and seek judicial review within a limited period of time, but there are no indications yet whether this will happen.
4. The big picture: 101 complaints and collaboration among DPAs
The decision published by the Austrian DPA is the first one in the 101 complaints that noyb submitted directly to 14 DPAs across Europe (EU and the European Economic Area) at the same time in August 2020, from Malta, to Poland, to Liechtenstein, with identical legal arguments centered on international data transfers to the US through the use of Google Analytics or Facebook Connect, and all against websites of local or national relevance – so most likely these complaints will be considered outside the One-Stop-Shop mechanism.
The bulk of the 101 complaints were submitted to the Austrian DPA (about 50), either immediately under its competence, as in the analyzed case, or as part of the One-Stop-Shop mechanism where the Austrian DPA acts as the concerned DPA from the jurisdiction where the complainant resides, which likely needed to forward the cases to the many lead DPAs in the jurisdictions where the targeted websites have their establishment. This way, even more DPAs will have to make a decision in these cases – from Cyprus, to Greece, to Sweden, Romania and many more. About a month after the identical 101 complaints were submitted, the EDPB decided to create a taskforce to “analyse the matter and ensure a close cooperation among the members of the Board”.
In contrast, the complaint against the European Parliament was not part of this set; it was submitted separately, at a later date, to the EDPS, relying on similar arguments on the issue of international data transfers to the US through Google Analytics and Stripe cookies. Even though it was not part of the 101 complaints, it is clear that the authorities cooperated or communicated, with the EDPS making a direct reference to the Austrian proceedings, as shown above.
In other signs of cooperation, both the Dutch DPA and the Danish DPA have published notices immediately after the publication of the Austrian decision to alert organizations that they may soon issue new guidance in relation to the use of Google Analytics, specifically referring to the Austrian case. Of note, the Danish DPA highlighted that “as a result of the decision of the Austrian DPA” it is now “in doubt whether – and how – such tools can be used in accordance with data protection law, including the rules on transfers of personal data to third countries”. It also called for a common approach of DPAs on this issue: “it is essential that European regulators have a common interpretation of the rules”, since data protection law “intends to promote the internal market”.
In the end, the DPAs are applying findings from a judgment made by the CJEU, which has ultimate authority in the interpretation of EU law that must be applied across all EU Member States. All this indicates that a series of similar decisions will likely be published successively in the short to medium term, with small chances of significant variations. This is why the two cases summarized here can be seen as the first two dominoes to fall.
This domino effect, though, will not only be about the 101 cases and the specific cookies they target. It eventually concerns all US based service providers and businesses that receive personal data from the EU potentially covered by the broad reach of FISA Section 702 and EO 12333; all EU based organizations, from website operators, to businesses, schools, and public agencies, that use the services provided by the former or engage them as business partners, and disclose personal data to them; and it may well affect all EU based businesses that have offices and subsidiaries in the US and that make personal data available to these entities.
5 Tips for Protecting Your Privacy Online
Today, almost everything we do online involves companies collecting personal information about us. Personal data is collected and regularly used for a number of reasons – like when you use social media accounts, when you shop online or redeem digital coupons at the store, or when you search the internet.
Sometimes, information is collected about you by one company, and then shared or sold to another. While data collection can offer benefits to both you and businesses – like connecting with friends, getting directions, or sales promotions – it can also be used in ways that are intrusive – unless you take control.
There are many ways you can protect your personal data and information and control how it is shared and used. On this Data Privacy Day – recognized annually on January 28 to mark the anniversary of Convention 108, the first binding international treaty to protect personal data – the Future of Privacy Forum and other organizations are raising awareness and promoting best practices for data privacy.
For the second year in a row, FPF is partnering with Snap to provide a privacy-themed Snap filter to spread awareness of the importance of data privacy to your networks. Scan the Snapcode below to check it out:
Share the pictures you took using our interactive lens on social media using the hashtag #FPFDataPrivacyDay2022.
There are steps you can take to better protect your privacy online. Below, we’ve listed five tips you can follow when using your mobile device.
1. Check Your Privacy Settings
Many social media sites include options on how you can tailor your privacy settings to limit the ways data is collected or used. Snap provides privacy options that control who can contact you, and many other options. Start with the Snap Privacy Center to review your settings. You can find those choices here.
Snap provides options for you to view any data they have collected about you, including the date your account was created and the devices that have access to your account. Downloading your data allows you to view the types of information that have been collected and modify your settings accordingly.
Instagram allows you to manage a variety of privacy settings, including who has access to your posts, who can comment on or like your post, and manage what happens to posts after you delete them. You can view and change your settings here.
TikTok allows you to decide between public and private accounts, decide which accounts can view posted videos, and allows you to change your personalized ad settings. You can check your settings here.
Twitter allows you to manage whether it shares your information with third-party businesses, whether the site can track your internet browsing outside of Twitter, and whether you’d like ads to be tailored to you. Check your settings here.
Facebook provides a range of privacy settings that can be found here.
What other apps do you use often? Check to see which settings they provide!
2. Limit Sharing of Location Data
Most social media sites will ask for access to your location data. Do they need it for some reason that is obvious, like helping you with directions or showing your nearby friends? Feel free to say no. And be aware that location data is often used to tailor ads and recommendations based on locations you have recently visited. Allowing access to location services may also permit the sharing of location information with third-parties.
Snap has a variety of ways to control who is able to view your location. On their settings page, you can select whether no one, just select users, or all friends will be able to view your location on Snap Map. You can also choose to deny individual users from viewing your location.
To check the location permissions granted to social media apps on an iPhone or Android, follow the steps below.
Navigate to “Settings”, then “Location,” and then “App Permissions”
Select the social media app you’d like to prevent from accessing your location
Make sure either “Don’t Allow” or “Allow only while using the app” is selected.
3. Keep Your Devices & Apps Up to Date
Keeping software up to date is the best way to make sure that your device is protected against the latest software vulnerabilities. Having the latest security software, web browser, and operating system installed helps protect against a range of online threats. By enabling automatic updates on your devices, you can be sure that your apps and operating system are always current.
Users can check the status of their operating systems in the settings app. For iPhone users, navigate to “Software Update,” and for Android devices, look for the “Security” page in settings.
4. Use a Password Manager
Utilizing a strong and secure password for each web-based account you have helps ensure personal data and information are protected from unauthorized use. It can be difficult to remember complex passwords for every account and using a password manager can help. Password managers save passwords as you create and log in to your accounts, often alerting you of any duplicates and suggesting the creation of a stronger password. For example, when signing up for new accounts and services, if you use an Apple product, you can allow your iPhone, Mac, or iPad to generate strong passwords and safely store them in iCloud Keychain for later access. Some of the best third-party password managers can be found here.
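As a rough illustration of how such tools generate strong credentials, here is a minimal Python sketch using the standard library’s `secrets` module, which is designed for cryptographically secure random choices. The function name and character-class policy are our own illustrative assumptions, not any particular password manager’s implementation.

```python
import secrets
import string


def generate_password(length=16):
    """Generate a random password using a cryptographically secure RNG.

    Retries until the candidate contains at least one lowercase letter,
    one uppercase letter, and one digit - a simple (assumed) policy that
    mirrors what many password managers enforce.
    """
    if length < 4:
        raise ValueError("length too short to satisfy all character classes")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw
```

Note the use of `secrets.choice` rather than `random.choice`: the `random` module is not suitable for security-sensitive values.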
5. Enable Two-Factor Authentication
Two-factor authentication adds an additional layer of protection to your accounts. The first authentication is the normal username and password combination that has been used for years. The second factor is either a text message or email including a code that is sent to a personal device. This added step makes it harder for malicious actors to gain access to your accounts. Two-factor authentication only adds a few seconds to your day, but can save you from the headache and harm that comes from compromised accounts. To be even safer, use an authenticator app as your second factor.
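Authenticator apps typically implement the TOTP algorithm standardized in RFC 6238: a shared secret and the current time window are combined with HMAC to produce a short-lived numeric code. The following is a minimal Python sketch of that algorithm using only the standard library (the function name is ours; real apps add clock-drift tolerance, rate limiting, and secure secret storage):

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password.

    secret_b32: the shared secret, base32-encoded (as in QR-code setup keys).
    The current time is divided into 30-second steps; each step yields a
    fresh HMAC-SHA1 digest, which is truncated to a short numeric code.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    now = time.time() if for_time is None else for_time
    counter = int(now // step)  # number of time steps since the Unix epoch
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends on the current time step, an intercepted value is useless after about 30 seconds, which is what makes authenticator apps a stronger second factor than codes sent over SMS or email.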
As many of us continue to work and learn remotely, it’s important to stay aware of the information you share on and offline. Remember to adjust your settings regularly, staying on top of any privacy changes and updates made on the web applications you use daily. Take charge of protecting your personal data and encourage others to look at the information they may be sharing. By adjusting your settings and making changes to your web accounts and devices, you can better maintain the security and privacy of your personal data.
If you’re interested in learning more about one of the topics discussed here or about other issues that are driving the future of privacy, sign up for our monthly briefing, check out one of our upcoming events, or follow us on Twitter and LinkedIn. FPF brings together some of the top minds in privacy to discuss how we can all benefit from the insights gained from data, while respecting the individual right to privacy.
Five Burning Questions (and Zero Predictions) for the U.S. State Privacy Landscape in 2022
Entering 2022, the United States remains one of the only major economic powers that lacks a comprehensive, national framework governing the collection and use of consumer data throughout the economy. An ongoing impasse in federal efforts to advance privacy legislation has created a vacuum that state lawmakers, seeking to secure privacy rights and protections for their constituents, are actively working to fill.
Last year we saw scores of comprehensive privacy bills introduced in dozens of states, though when the dust settled, only Virginia and Colorado had joined California in successfully enacting new privacy regimes. Now, at the outset of a new legislative calendar, many state legislatures are positioned to make progress on privacy legislation. While stakeholders are eager to learn which (if any) states will push new laws over the finish line, it remains too early in the lawmaking cycle to make such predictions with confidence. So instead, this post explores five key questions about the state privacy landscape that will determine whether 2022 proves to be a pivotal year for the protection of consumer data in the United States.
1. Will A Single (State) Framework Emerge Supreme?
A common refrain heard in the U.S. privacy debate is that each state creating its own data privacy rules threatens to create a confusing and costly “patchwork” of divergent laws. While some degree of tension between different state privacy laws is already baked into the landscape, regulated entities may be hoping that a particular regulatory approach emerges as an interoperable norm across the states. Some of the likely contenders for this title are laid out below.
California Model
California was the first mover on comprehensive privacy legislation, enacting the California Consumer Privacy Act (CCPA) in June 2018. At the time, many observers predicted that the “California effect” would establish the CCPA as a de facto national standard and drive the adoption of similar laws throughout the nation (reminiscent of breach reporting statutes in the 2000s). True to form, 2019 and 2020 saw dozens of CCPA-style copycat bills introduced; however, no such bill has yet proven successful. One possible reason is that California’s approach to privacy has been something of a ‘moving target’ – having undergone multiple amendments, an extended Attorney General rulemaking process, the conversion of the CCPA into the California Privacy Rights Act (CPRA) by ballot initiative, and the recent launch of a new CPRA rulemaking process.
Virginia/Colorado Model
In 2021, a new challenger appeared with the enactment of the Virginia Consumer Data Protection Act (VCDPA) and the Colorado Privacy Act (CPA). While containing multiple important distinctions (that will be explored in a subsequent post), these laws generally adhere to the same basic framework for establishing consumer privacy rights and dividing business obligations between data “controllers” and “processors.” The Virginia/Colorado model also exceeds California in certain key areas, including by requiring affirmative consent for the processing of “sensitive” personal data. As a result, this framework could represent a more stable approach to protecting privacy than California that may be palatable to consumer and industry stakeholders alike.
Other Models
While California and the Virginia/Colorado models are the clear favorites, they are not the full field of contenders that could emerge as the dominant U.S. privacy framework. Last July, the Uniform Law Commission (ULC) finalized its model privacy law, the “Uniform Personal Data Protection Act,” which has already been introduced in the District of Columbia (CB 24-451), Nebraska (LB 1188), and Oklahoma (HB 3447). Notably, the ULC model significantly conflicts with established privacy frameworks and has received reactions ranging from skepticism to hostility from both industry and consumer advocacy groups, creating questions about its political viability.
There is also pending legislation in several states that, if enacted, would constitute distinct regulatory approaches from the adopted laws. For example, there are bills to watch in Massachusetts (S 46) (establishing fiduciary-style obligations on businesses); New Jersey (A 505) (including a ‘legitimate interest’ basis for data processing); and Oklahoma (HB 2969) (containing expansive use limitation requirements).
In surveying the state privacy bills introduced this year, a clear divide between the California and Colorado/Virginia frameworks is evident. State bills in Alaska (HB 222) and Indiana (HB 1261) include California-style rights for consumers to opt-out of the sale and sharing of personal information and to limit the use and disclosure of sensitive personal information. Elsewhere in Hawaii (SB 2797) and Pennsylvania (HB 2257), legislative proposals more closely follow the Virginia/Colorado approach to requiring affirmative consent for processing “sensitive data” in addition to creating opt-out rights for data sales, targeted advertising, and profiling.
2. Where Will Regulatory Processes Lead?
While much attention will be paid to the state legislative horse race, two states with laws on the books will undertake important privacy rulemaking processes this year. In California, the newly constituted California Privacy Protection Agency (CPPA) is directed to conduct a wide-ranging rulemaking that will clarify key definitions and compliance issues left open under the CPRA. Rulemaking subjects include the CPRA’s new right of correction, valid uses of data for ‘business purposes,’ and the application of the law to automated decision-making processes. In Colorado, the Attorney General has similarly been delegated broad rulemaking authority and is specifically tasked with the adoption of “rules that detail the technical specifications for one or more universal opt-out mechanisms” (discussed further below).
California and Colorado’s rulemaking processes will likely have significant impacts on the ultimate implementation and exercise of consumers’ new privacy rights in these states. Furthermore, while the CPRA and CPA statutes specifically direct the development of rules governing certain issues, their grants of rulemaking authority are open-ended, meaning that final regulations may potentially broaden the consumer rights and business compliance obligations established under these laws. However, such an expansive regulatory approach would likely be strongly contested. For example, the CPPA’s request for comment on preliminary rulemaking activity surfaced significant fault lines in stakeholder expectations for what CPRA rulemaking can and should entail for significant elements of the law.
Not all new state privacy laws will necessarily provide for open-ended rulemaking processes and Virginia’s privacy law lacks a rulemaking process entirely. Privacy bills under consideration in 2022 have largely followed an ‘all-or-nothing’ approach to rulemaking with legislation such as Maryland (SB 11) and Washington (HB 1850) seeking to give the state Attorney General or other regulators broad rulemaking authority and bills like Ohio (HB 376) providing for no rulemaking at all. Going forward, the inclusion of rulemaking authority in new privacy laws could create additional divergences between different state approaches. However, rulemaking may also help state laws remain flexible in light of changing technology and allow lawmakers to delegate some of the more nuanced technical issues to experts with the benefit of public participation.
3. How will State Activity Impact the Federal Debate?
Despite the introduction of over a dozen federal bills and numerous hearings since 2018, bipartisan federal collaboration on comprehensive privacy legislation has repeatedly stalled out. Key lawmakers remain divided over critical issues such as private rights of action, preemption, and how to regulate against discriminatory uses of data.
Advancements in privacy at the state level will likely breathe new life into the dormant federal debate – but its impact remains uncertain. One possibility is that the adoption of additional state privacy laws may ultimately create so much regulatory complexity for industry that breakthrough on federal privacy legislation becomes inevitable.
Alternatively, the enactment of even a single state law that contains a broad private right of action may push concerned industry stakeholders towards compromise over a federal privacy bill. Most industry participants view private lawsuits as particularly ‘ill-suited’ for the privacy context, and no state has yet enacted comprehensive privacy legislation providing for expansive private lawsuits. A range of approaches to the issue of private lawsuits have been taken in the legislation under consideration this year. In addition to bills that would establish expansive causes of action such as New York (S 6701) or explicitly disclaim such suits like Florida (SB 1864), some bills would restrict lawsuits to particular violations like Florida (HB 9) or permit lawsuits but restrict statutory damages such as Washington State (SB 5813).
Finally, the successful enactment of state privacy laws containing novel approaches to protecting privacy could inform new legislative proposals at the federal level. Given that the only states to enact comprehensive privacy laws have had (at the time) unified Democratic governments, the adoption of a privacy law by a Republican-led state could impact the contours of the federal conversation. Serious efforts to enact privacy legislation have been undertaken in Republican controlled state legislatures in Florida, Ohio, and Oklahoma, with more likely on the way.
4. Will ‘Universal’ Privacy Controls be the Next Big Thing?
Many stakeholders have expressed concern that leading privacy frameworks rely too heavily on individual controls and consent options that are overwhelming and unscalable for ordinary consumers in practice. One response to this criticism has been the development and legal recognition of ‘user-selected universal opt-out mechanisms,’ often exercised through browser settings or plug-ins, that signal a consumer’s request to exercise their privacy rights to the websites they visit. Under present law, such privacy controls are omitted from the VCDPA; recognized, but not clearly mandated under the CPRA; and will be required in Colorado come 2024.
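One prominent example of such a mechanism is the Global Privacy Control (GPC) specification, which conveys a consumer's opt-out preference as an HTTP header (`Sec-GPC: 1`) attached to each request. As a rough illustration only (a minimal sketch of a server-side check, not a representation of how any particular business or state law requires the signal to be processed), a website might detect the signal like this:

```python
def gpc_opt_out(headers: dict) -> bool:
    """Return True if the request carries a Global Privacy Control
    opt-out signal, i.e. the `Sec-GPC: 1` header defined by the GPC
    specification."""
    return headers.get("Sec-GPC", "").strip() == "1"

# Hypothetical incoming request headers for illustration.
request_headers = {"Sec-GPC": "1", "User-Agent": "ExampleBrowser/1.0"}

if gpc_opt_out(request_headers):
    # Where applicable law recognizes the signal, the business would
    # treat it as a request to opt out of data sales or sharing.
    print("honor opt-out of sale/sharing")
```

The simplicity of the check is part of the appeal of universal controls: the consumer sets a preference once, and every site can read it mechanically rather than presenting its own consent interface.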
As a newer approach to expressing privacy preferences, stakeholders have raised questions about the legal and practical effects that this class of ‘universal’ controls should carry. For example, it is unclear how businesses should respond if they receive multiple, conflicting signals from different browsers or devices used by the same person. Furthermore, the potential development of separate processes governing the adoption of new signal mechanisms, and likely state-by-state differences in the underlying privacy rights these controls will exercise, could further complicate their use.
Nevertheless, ‘universal’ privacy controls represent a significant opportunity to advance consumer privacy interests and appear poised to become an increasingly prominent aspect of the privacy debate in the years to come. At present, the majority of active state bills, including those in Florida (SB 1864) and Kentucky (SB 15), would give businesses flexibility in determining context-appropriate methods for the exercise of consumers’ privacy rights. However, bills in Maryland (SB 11) and Alaska (HB 159) would join Colorado in providing for the mandatory recognition of such signals.
5. Will Sectoral Privacy Laws Lead the Way?
This post has focused on ‘comprehensive’ privacy legislation: broad-based legal frameworks that would establish baseline, industry- and technology-neutral rules for the protection of personal data throughout a state’s economy. However, state lawmakers are also on track to propose hundreds of more narrowly focused privacy bills that would regulate particular industries such as data brokers (Delaware HB 262) or ISPs (New York S 3885); cover specific categories of information such as children’s data (Washington State HB 1697) or biometrics (Kentucky HB 32); or establish specific business obligations such as reasonable security practices (West Virginia HB 2925) or transparency requirements (New Jersey A 1971). While some of these proposals are particularly narrow or limited in scope (for example, establishing a commission to study a particular issue), others could serve as both templates and catalysts for sweeping change in Americans’ privacy expectations and outcomes.
Conclusion
This commentary has noted several states where privacy legislation is already under serious consideration for the 2022 legislative calendar. However, experience shows that fast-shifting local political dynamics can produce surprises in state privacy efforts. Last year’s adoption of new privacy laws in Colorado and Virginia took many observers by surprise, and successful legislation may emerge from unexpected jurisdictions again this year. This post has posed many questions but can offer only one clear forecast: a turbulent and exciting year for consumer privacy legislation is just beginning. Be sure to follow the Future of Privacy Forum for updates on the U.S. privacy landscape throughout the year.
Addressing the Intersection of Civil Rights and Privacy: Federal Legislative Efforts
Last month, the National Telecommunications and Information Administration (NTIA) hosted virtual listening sessions on the intersection of data privacy, equity, and civil rights. Around the same time, the FTC announced that it will begin rulemaking on discriminatory practices in automated decision-making, and an influx of state legislation containing civil rights provisions has recently been introduced.
Decades of research demonstrate the effects of data processing on existing structural inequalities such as race, gender, and disability, and there have been numerous attempts by federal and state governments to regulate the disparate impacts of data practices on protected classes. Though the intersection of data privacy and civil rights has been discussed in policy circles for years, these bills containing civil rights provisions have been surprisingly under-analyzed.
In the coming weeks and months, FPF will be publishing a blog series to provide an informative overview of government efforts to regulate discriminatory data practices through proposed legislation and executive agency enforcement. This blog is the first in the series and will cover federal legislative efforts.
In sum:
In recent years, both Democrats and Republicans have introduced several comprehensive data privacy bills that would prohibit data processing that violates anti-discrimination laws. The parties remain divided, however, over auditing and reporting burdens and over enforcement.
There is also division over the scope of civil rights protections. While some proposals would apply the existing federal anti-discrimination framework to data processing activities, others would effectively expand civil rights laws, for example by broadening the definition of “protected classes” and extending public accommodation law (which has traditionally applied only to physical spaces) to online sellers of goods and services.
Some representatives and advocates remain concerned about the effects and enforcement of adtech and targeted advertising on marginalized and vulnerable populations.
Leading Federal Comprehensive Data Privacy Bills
Members of Congress have introduced a number of comprehensive data privacy bills in recent years, some of which contain civil rights provisions. The leading proposals from Democratic and Republican leaders in the Senate Commerce Committee are the Consumer Online Privacy Rights Act (COPRA) and the SAFE DATA Act (Setting an American Framework to Ensure Data Access, Transparency, and Accountability Act).
Table 1 (below) provides a helpful comparison of the key civil rights provisions in each bill. In general, COPRA contains more comprehensive civil rights provisions than the SAFE DATA Act, which mainly codifies unlawful data processing activities under federal anti-discrimination laws and permits the FTC to inform other agencies about potential violations.
Under COPRA, it would be unlawful to conduct discriminatory data processing in areas covered by federal anti-discrimination laws, such as housing, employment, and education, on the basis of a protected class. Protected classes would include those already protected under the law (race, sex, disability, etc.), as well as include new ones such as source of income, familial status, and biometric information. COPRA would also require entities to conduct impact assessments on the accuracy, bias, and potential discrimination of their algorithms. Violations of the law would be enforced through the FTC, state AGs, or through a private right of action, where a plaintiff could recover up to $1,000 per violation per day. Small businesses, however, would be exempt. In comparison (see Table 1), the SAFE DATA Act contains few civil rights provisions.
Table 1. Key civil rights provisions in COPRA (Section 108) and the SAFE DATA Act (Section 201)

Discrimination Provisions

COPRA: A covered entity shall not process or transfer covered data on the basis of [protected class] for the purpose of: (A) advertising, marketing, soliciting, offering, selling, leasing, licensing, renting, or otherwise commercially contracting for a housing, employment, credit, or education opportunity, in a manner that unlawfully discriminates against or otherwise makes the opportunity unavailable to the individual or class of individuals; or (B) in a manner that unlawfully segregates, discriminates against, or otherwise makes unavailable to the individual or class of individuals the goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation.

SAFE DATA Act: Whenever the Commission obtains information that a covered entity may have processed or transferred covered data in violation of Federal anti-discrimination laws, the Commission shall transmit such information…to the appropriate Executive agency or State agency with authority to initiate proceedings relating to such violation.

Algorithmic Decision-making

COPRA: [A] covered entity engaged in algorithmic decision-making…to make or facilitate advertising for housing, education, employment or credit opportunities…or restrictions on the use of, any place of public accommodation, must annually conduct an impact assessment of such algorithmic decision-making that— (A) describes and evaluates the development of the covered entity’s algorithmic decision-making processes, including the design and training data used to develop the algorithmic decision-making process and how the algorithmic decision-making process was tested for accuracy, fairness, bias, and discrimination; and (B) assesses whether the algorithmic decision-making system produces discriminatory results on the basis of an individual’s or class of individuals’ [protected class].

SAFE DATA Act: The Commission shall conduct a study…examining the use of algorithms to process covered data in a manner that may violate Federal anti-discrimination laws.

Enforcement

COPRA: FTC, state attorneys general, and individuals through a private right of action. A plaintiff bringing suit would not be required to prove injury in fact (a violation alone is the injury) and could seek damages up to $1,000 per violation (or actual damages, if greater). The bill would also invalidate any pre-dispute arbitration agreement that waives claims arising under this law.

SAFE DATA Act: FTC, or other appropriate state or federal agency.
Federal Sectoral Legislation
In some cases, sectoral efforts have taken a more dynamic approach to addressing specific harms. For example, Senator Markey (D-MA) introduced the Algorithmic Justice and Online Platform Transparency Act, which would prohibit unlawful discrimination in automated decision-making (as opposed to general data processing, as in COPRA and SAFE DATA) and impose transparency requirements mandating review and assessment of algorithms for disparate impact on protected classes.
Importantly, the bill would explicitly extend public accommodation law to “any commercial entity that offers goods and services through the internet to the general public.” Currently, Title II of the Civil Rights Act of 1964 and Title III of the Americans with Disabilities Act prohibit discrimination on the basis of race, color, religion, national origin, or disability in places of “public accommodation,” such as hotels, restaurants, theaters, and similar physical spaces. The law has not been amended to extend to online commerce (and the federal circuit courts are split on the issue with respect to ADA Title III). While COPRA includes “places of public accommodation” within its scope of entities that may not conduct discriminatory data processing, it does not explicitly expand federal anti-discrimination law to online retailers and marketplaces. Markey’s bill would.
In a more recent example, the “Banning Surveillance Advertising Act,” introduced by Representative Anna Eshoo (D-CA) this week, would flatly prohibit targeted advertising based on characteristics protected under current federal anti-discrimination law – such as race, color, sex (including sexual orientation and gender expression), and disability. Unlike COPRA, the SAFE DATA Act, and the Markey bill, this legislation contains no small business exemption.
Advocates’ Goals
Most proposals have not gone as far as some civil rights advocates have proposed. For example, the Lawyers’ Committee for Civil Rights Under Law and Free Press introduced a comprehensive Model Bill in March 2019 that would prohibit discrimination not only in economic opportunities (housing, employment, credit, insurance, or education) and in public accommodations (including any business that offers goods or services through the internet, as in the Markey bill), but also in any manner that would interfere with a person’s right to vote. Similar to COPRA, the Model Bill would also impose auditing requirements for discriminatory processing.
In the Lawyers’ Committee proposal, the law would be enforced by the FTC, the states, the DOJ Civil Rights Division, or through a private right of action. The civil penalty for violations would be heftier than in other legislation, at $16,500 per violation (or up to 4% of annual revenue if punitive damages are warranted or the action is brought by a state).
Other notable provisions in the Model Bill which are not in COPRA nor the SAFE DATA Act include:
Expanded Definition of “Privacy Risk.” The expanded definition would include intangible harms such as psychological harm (anxiety, embarrassment, fear), stigmatization or reputational harm, and disruption from unwanted commercial solicitations.
Shifting Burden of Proof. Typically, a party bringing a civil suit must prove each assertion or claim. Similar to existing civil rights law, however, the Model Bill would use a burden-shifting framework: if the plaintiff demonstrates that a data processing activity has a disparate impact on the basis of a protected characteristic, the burden shifts to the defendant to show that the processing was necessary to achieve a substantial, legitimate, and nondiscriminatory interest. If the defendant meets that burden, the burden shifts back to the plaintiff to demonstrate that an alternative policy or practice could serve that interest with a less discriminatory effect.
Affirmative Duty to Interrupt. Entities would have a duty to prevent or aid in preventing civil rights violations under the law: any entity that makes a conscious effort to avoid actual knowledge of a violation, and has the ability to prevent or halt that violation, would also be liable.
Targeted Advertising. At least some forms of targeted advertising would be regulated as an unfair and deceptive practice through the FTC, taking into consideration factors like predatory or manipulative practices that harm marginalized populations, as well as methods for promoting diversity and inclusion of small businesses owned by underrepresented populations, amongst others.
We anticipate that the debate regarding the scope and substance of civil rights protections in data privacy policy is just beginning. The NTIA intends to publish a Notice and Request for Comments in the Federal Register regarding this topic, where members of the public unable to participate in the Listening Sessions are encouraged to respond.
Brain-Computer Interfaces & Data Protection: Understanding the Technology and Data Flows
This post is the first in a four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs.
Click here for FPF and IBM’s full report: Privacy and the Connected Mind. Additionally, FPF-curated resources, including policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are here.
I. Introduction – What are BCIs and Where are They Used?
Today, Brain-Computer Interfaces (BCIs) are primarily used in the health-care context for purposes including rehabilitation, diagnosis, symptom management, and accessibility. While BCI technologies are not yet widely adopted in the consumer space, there is increasing interest and proliferation of new direct-to-consumer neurotechnologies from gaming to education. It is important to understand how these technologies use data to provide services to individuals and institutions, as well as how the emergence of such technologies across sectors can create privacy risks. As organizations work to build BCIs while mitigating privacy risks, it is paramount for policymakers, consumers, and other stakeholders to understand the state of the technology today and associated neurodata and its flows.
BCIs are computer-based systems that directly record, process, or analyze brain-specific neurodata and translate these data into outputs that can be used as visualizations or aggregates for interpretation and reporting purposes and/or as commands to control external interfaces, influence behaviors or modulate neural activity.
BCIs can be broadly divided into three categories: 1) those that record brain activity; 2) those that modulate brain activity; and 3) those that do both, also called bi-directional BCIs (BBCIs).
BCIs can be invasive or non-invasive and employ a number of techniques for collecting neurodata and modulating neural signals.
Neurodata is data generated by the nervous system, which consists of the electrical activities between neurons or proxies of this activity.
Personal neurodata is neurodata that is reasonably linkable to an individual.
BCIs that record brain activity are more commonly used in the healthcare, gaming, and military contexts. Modulating BCIs are typically found in the healthcare context, such as when used to treat Parkinson’s disease and other movement disorders by using deep brain stimulation. BCIs cannot at present or in the near future “read a person’s complete thoughts,” serve as an accurate lie detector, or pump information directly into the brain.
II. BCIs Can Be Invasive or Non-Invasive. Both Employ a Number of Techniques for Recording Neurodata and Modulating Neural Signals
Invasive BCIs are installed directly into—or on top of—the wearer’s brain through a surgical procedure. Today, invasive BCIs are used in the health context for a variety of purposes, such as improving patients’ motor skills. Invasive BCIs can involve several different types of implants. One, the Utah Array, is an electrode array installed in the brain that relies on a series of small metal spikes set within a small square implant to record or modulate brain signals. Other prominent examples of invasive BCIs rely on electrocorticography (ECoG), where electrodes are attached to the brain’s exposed surface to measure the cerebral cortex’s electrical activity. ECoG is most widely used to help medical providers locate the brain area that is the center of epileptic seizures.
Unlike invasive BCIs, non-invasive BCIs do not require surgery. Instead, non-invasive BCIs rely on external electrodes and other sensors to collect and modulate neural signals. One of the most prominent examples of a non-invasive BCI technology is an electroencephalogram (EEG)—a method for recording the brain’s electrical activity, with electrodes placed on the scalp’s surface to measure neurons’ activity. EEG-based BCIs are common in gaming, where collected brain signals are used to control in-game characters and select in-game items. Another noteworthy non-invasive method is functional near-infrared spectroscopy (fNIRS), which uses near-infrared light to measure proxies of brain activity via changes in blood flow to certain regions, specifically changes in oxygenated and deoxygenated hemoglobin concentration. fNIRS is especially prominent in wellness and medical BCIs, such as those used to control prosthetic limbs.
Other non-invasive techniques go beyond simply recording neurodata by also modulating the brain. For example, transcranial direct current stimulation (tDCS) and transcranial magnetic stimulation (TMS) are both used to modulate neuroactivity. Non-invasive neurotechnologies should not be equated with harmless technologies: just because a device is not implanted on or within the brain does not mean it poses no health, privacy, or data use risks.
Both invasive and non-invasive BCIs are generally characterized by four components:
Signal Acquisition and Digitization: Sensors (e.g., EEG, fMRI, etc.) measure neural signals. The device amplifies the signals to levels that enable processing and sometimes filters them to remove unwanted elements, such as noise and artifacts. The signals are then digitized and transferred to a computer.
Feature Extraction: As part of signal processing, applicable signals are separated from extraneous data elements, including artifacts and other undesirable elements.
Feature Translation: Signals are transformed into usable outputs.
Device Output: Translated signals can be used as visualizations for research or care, or they can be used as directed instructions, including feedforward commands utilized to operate external BCI components (e.g. external software or hardware like a robotic arm) or feedback commands which may provide afferent (conducted inward) information to the user or may directly modulate on-going neural signals.
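The four components above can be illustrated with a toy single-channel pipeline. This is a simplified sketch, not any actual device's implementation: the "acquired" signal is simulated rather than recorded, and the 8–12 Hz alpha band, the 0.5 power threshold, and the SELECT/IDLE commands are all invented for illustration.

```python
import numpy as np

def acquire_signal(duration_s=2.0, fs=250):
    """Signal acquisition and digitization: simulate a digitized
    single-channel EEG trace (a 10 Hz alpha rhythm plus noise),
    sampled at fs Hz."""
    t = np.arange(0, duration_s, 1.0 / fs)
    alpha = 20e-6 * np.sin(2 * np.pi * 10 * t)  # 10 Hz alpha component (volts)
    noise = 5e-6 * np.random.default_rng(0).standard_normal(t.size)
    return alpha + noise, fs

def extract_feature(signal, fs, band=(8, 12)):
    """Feature extraction: compute the fraction of total signal power
    falling in the alpha band (8-12 Hz), discarding everything else."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / power.sum()  # relative band power, 0..1

def translate_feature(alpha_ratio, threshold=0.5):
    """Feature translation: map the continuous feature to a discrete
    command for the output stage."""
    return "SELECT" if alpha_ratio > threshold else "IDLE"

# Device output: the translated command could drive external hardware
# or software (a cursor, a speller, a robotic arm).
signal, fs = acquire_signal()
command = translate_feature(extract_feature(signal, fs))
print(command)
```

Real systems differ in every particular (multi-channel arrays, artifact rejection, trained classifiers rather than a fixed threshold), but the acquire → extract → translate → output structure is the same.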
III. Recorded Neurodata Becomes Personal Neurodata When it is Reasonably Linkable to an Individual
Neurodata is data generated by the nervous system, which consists of the electrical activities between neurons or proxies of this activity. Neurodata can be both directly recorded from the brain—in the case of BCIs—or indirectly recorded from an individual’s spinal cord, muscles, or peripheral nerves.
At times, neurodata can be personally identifiable when reasonably linkable to an individual or when combined with other identifying data associated with an individual, such as when part of a particular user profile. The recording and processing of personal neurodata can produce information related to an individual’s biology and cognitive state that is directly tied to that user’s record, use, or account. Additionally, the processing of personal neurodata can lead to inferences about an individual’s moods, intentions, and various physiological characteristics, such as arousal. Machine learning (ML) can help determine whether a neurodata pattern matches a general identifier or a particular class or physiological state. Although identifying an individual based solely on their recorded personal neurodata is difficult, such identification has been shown to be possible with relatively minimal data (less than 30 seconds’ worth of electrical activity) within a lab setting. Some experts believe that such identification is feasible more broadly in the near term.
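To make the identification risk concrete, consider a deliberately simplified sketch (all values are invented, and the nearest-centroid matcher stands in for the more sophisticated ML models used in the research literature): if each person's recordings cluster around a stable "fingerprint" of signal features, a new recording can be linked back to an enrolled individual.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical enrolled "fingerprints": each person's characteristic
# relative EEG band powers (delta, theta, alpha, beta). Invented values.
fingerprints = {
    "person_a": np.array([0.45, 0.25, 0.20, 0.10]),
    "person_b": np.array([0.30, 0.20, 0.35, 0.15]),
    "person_c": np.array([0.25, 0.35, 0.15, 0.25]),
}

def record_sample(person, noise=0.02):
    """Simulate a short new recording: the person's fingerprint
    perturbed by session-to-session noise."""
    return fingerprints[person] + rng.normal(0, noise, 4)

def identify(sample):
    """Nearest-centroid matching: link the recording to whichever
    enrolled fingerprint it is closest to."""
    return min(fingerprints, key=lambda p: np.linalg.norm(sample - fingerprints[p]))

print(identify(record_sample("person_b")))
```

The privacy point is that no name or account needs to travel with the recording: if the features are stable enough, the neurodata itself is the identifier, which is what makes it "reasonably linkable" to an individual.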
Personal neurodata can reveal seemingly innocuous data; record behavioral interactive activity; include health information associated with an individual; or potentially provide insight into an individual’s feelings or intentions. BCIs may eventually progress into new arenas, recording increasingly sensitive personal neurodata, leading to intimate inferences about individuals. Those applications may seek to include transcribing a wide-range of a wearer’s thoughts into text, serving as an accurate lie detector, and even implanting information directly into the brain. However, these speculative uses are still in the early research phases and could be decades from fruition, or perhaps never emerge.
IV. Conclusion
As BCIs evolve and become more commercially available across numerous sectors, it is paramount to understand the unique risks such technologies pose. Although our report, and this blog series, primarily focus on the privacy concerns—including questions about the transparency, control, security, and accuracy of data—around existing and emerging BCI capabilities, these technologies also raise important technical considerations and ethical implications related to, for example, fairness, justice, human rights, and personal dignity. We will highlight where additional ethical and technical concerns emerge in various use cases and applications of BCIs throughout this series.
12th Annual Privacy Papers for Policymakers Awardees Explore the Nature of Privacy Rights & Harms
The winners of the 12th annual Future of Privacy Forum (FPF) Privacy Papers for Policymakers Award ask big questions about what should be the foundational elements of data privacy and protection and who will make key decisions about the application of privacy rights. Their scholarship will inform policy discussions around the world about privacy harms, corporate responsibilities, oversight of algorithms, and biometric data, among other topics.
“Policymakers and regulators in many countries are working to advance data protection laws, often seeking in particular to combat discrimination and unfairness,” said FPF CEO Jules Polonetsky. “FPF is proud to highlight independent researchers tackling big questions about how individuals and society relate to technology and data.”
This year’s papers also explore smartphone platforms as privacy regulators, the concept of data loyalty, and global privacy regulation. The award recognizes leading privacy scholarship that is relevant to policymakers in the U.S. Congress, at U.S. federal agencies, and among international data protection authorities. The winning papers will be presented at a virtual event on February 10, 2022.
The winners of the 2022 Privacy Papers for Policymakers Award are:
Privacy Harms, by Danielle Keats Citron, University of Virginia School of Law; and Daniel J. Solove, George Washington University Law School
This paper looks at how courts define harm in cases involving privacy violations and how the requirement of proof of harm impedes the enforcement of privacy law, given the dispersed and minor effects that most privacy violations have on individuals. When these minor effects are suffered at a vast scale, however, individuals, groups, and society can experience significant harm. The paper offers language for courts to refer to when litigating privacy cases and provides advice as to when privacy harm should be considered in a lawsuit.
In this paper, Green analyzes the use of human oversight of government algorithmic decisions. From this analysis, he concludes that humans are unable to perform the desired oversight responsibilities, and that by continuing to use human oversight as a check on these algorithms, the government legitimizes the use of these faulty algorithms without addressing the associated issues. The paper offers a more stringent approach to determining whether an algorithm should be incorporated into a certain government decision, which includes critically considering the need for the algorithm and evaluating whether people are capable of effectively overseeing the algorithm.
The Surprising Virtues of Data Loyalty, by Woodrow Hartzog, Northeastern University School of Law and Khoury College of Computer Sciences, Stanford Law School Center for Internet and Society; and Neil M. Richards, Washington University School of Law, Yale Information Society Project, Stanford Center for Internet and Society
The data loyalty responsibilities for companies that process human information are now being seriously considered in both the U.S. and Europe. This paper analyzes criticisms of data loyalty that argue that such duties are unnecessary, concluding that data loyalty represents a relational approach to data that allows us to deal substantively with the problem of platforms and human information at both systemic and individual levels. The paper argues that the concept of data loyalty has some surprising virtues, including checking power and limiting systemic abuse by data collectors.
Smartphone Platforms as Privacy Regulators, by Joris van Hoboken, Vrije Universiteit Brussels, Institute for Information Law, University of Amsterdam; and Ronan Ó Fathaigh, Institute for Information Law, University of Amsterdam
In this paper, the authors look at the role of online platforms and their impact on data privacy in today’s digital economy. The paper first distinguishes the different roles that platforms can have in protecting privacy in online ecosystems, including governing access to data, design of relevant interfaces, and policing the behavior of the platform’s users. The authors then provide an argument as to what platforms’ role should be in legal frameworks. They advocate for a compromise between direct regulation of platforms and mere self-regulation, arguing that platforms should be required to make official disclosures about their privacy-related policies and practices for their respective ecosystems.
In late 2021, China enacted its first codified personal information protection law, the Personal Information Protection Law (PIPL). In this paper, Wang compares China’s PIPL with data protection laws in nine regions to assist overseas Internet companies and personnel who deal with personal information in better understanding the similarities and differences in data protection and compliance between each country and region.
Cameras are everywhere, and with the innovation of video analytics, questions are being raised about how individuals should be notified that they are being recorded. This paper studied 123 individuals’ sentiments across 2,328 video analytics deployment scenarios. Based on their findings, the researchers advocate for the development of interfaces that simplify the task of managing notices and configuring controls, which would allow individuals to communicate their opt-in/opt-out preferences to video analytics operators.
From the record number of nominated papers submitted this year, these six papers were selected by a diverse team of academics, advocates, and industry privacy professionals from FPF’s Advisory Board. The winning papers were selected based on the research and solutions that are relevant for policymakers and regulators in the U.S. and abroad.
In addition to the winning papers, FPF has selected two papers for Honorable Mention: Verification Dilemmas and the Promise of Zero-Knowledge Proofs by Kenneth Bamberger, University of California, Berkeley – School of Law; Ran Canetti, Boston University, Department of Computer Science, Boston University, Faculty of Computing and Data Science, Boston University, Center for Reliable Information Systems and Cybersecurity; Shafi Goldwasser, University of California, Berkeley – Simons Institute for the Theory of Computing; Rebecca Wexler, University of California, Berkeley – School of Law; and Evan Zimmerman, University of California, Berkeley – School of Law; and A Taxonomy of Police Technology’s Racial Inequity Problems by Laura Moy, Georgetown University Law Center.
FPF also selected a paper for the Student Paper Award, A Fait Accompli? An Empirical Study into the Absence of Consent to Third Party Tracking in Android Apps by Konrad Kollnig and Reuben Binns, University of Oxford; Pierre Dewitte, KU Leuven; Max van Kleek, Ge Wang, Daniel Omeiza, Helena Webb, and Nigel Shadbolt, University of Oxford. The Student Paper Award Honorable Mention was awarded to Yeji Kim, University of California, Berkeley – School of Law, for her paper, Virtual Reality Data and Its Privacy Regulatory Challenges: A Call to Move Beyond Text-Based Informed Consent.
The winning authors will join FPF staff to present their work at a virtual event with policymakers from around the world, academics, and industry privacy professionals. The event will be held on February 10, 2022, from 1:00 – 3:00 PM EST. The event is free and open to the general public. To register for the event, visit https://bit.ly/3qmJdL2.
Overcoming Hurdles to Effective Data Sharing for Researchers
In 2021, the challenges academics face in accessing corporate data sets for research, and the difficulties companies experience in making privacy-respecting research data available, broke into the news. With its long history of research data sharing, FPF saw an opportunity to bring together leaders from the corporate, research, and policy communities for a conversation to pave a way forward on this critical issue. We held a series of four engaging dinner-time conversations to listen and learn from the myriad voices invested in research data sharing. Together, we explored what it will take to create a low-friction, high-efficacy, trusted, safe, ethical, and accountable environment for research data sharing.
FPF formed an expert program committee to set the agenda for the discussion series. The committee guided our selection of topics to discuss, helped identify talented experts to present their views, and introduced FPF to new and salient stakeholders to the research data sharing conversation. The four virtual dinners were held on November 4, November 16, December 2, and December 18. Below are significant points of discussion from each event.
The Landscape of Data Sharing
During the first dinner discussion, participants emphasized the importance of reviewing research for ethical soundness and methodological rigor. Many highlighted the challenges of performing consistent and fair ethical and methodological reviews given corporate and research stakeholders’ different expectations and capabilities. FPF has explored this dynamic in the past: both companies and researchers operate with a responsibility to the public that requires technical, ethical, and organizational work to fulfill. Critical stakeholders, including consumers themselves, vary widely in their ability to articulate the clear and practical steps they take to build trusted public engagement in data sharing.
Participants offered that one of the key steps necessary to improve public and stakeholder trust in data sharing is to improve education for all parties on the topic. In particular, current efforts should be revised and expanded to more intuitively explain data collection, stewardship, hygiene, interoperability, and the differences in corporate and researchers’ data needs and expectations. Participants suggested improving consumers’ digital literacy so that consent to collecting or using personal data can be more meaningful and dynamic.
Research Ethics and Integrity for a Data Sharing Environment
During our second dinner, two topics emerged. First, participants pointed out how regulations and organizational rules limit the ability of institutions to superintend the ethical, technical, and administrative reviews called for in discussions of data sharing.
Second, the participants homed in on data de-identification and anonymization as critical components of ethical and technical review of proposed data uses for research. While variations in the interpretation of research ethics regulations and norms by Institutional Review Boards (IRBs) lead to an inconsistent and shifting landscape for researchers and companies, the expert panelists pointed out that the variation between IRBs is not as significant as the variation between regulatory controls for research governed by federal restrictions (the Common Rule) and those applied to commercial research under consumer protection laws.
Several participants advocated for a comprehensive U.S. federal data privacy law to equalize institutional variations, eliminate gaps between consumer data protection and research data protections, and clarify protections for research uses of commercial data. Efforts to close such regulatory gaps would require educating all stakeholders, including legislators, researchers, data scientists, and companies’ data protection officers, about the relative differences between risks around research data and risks associated with commercial use or breach of consumer data.
While participants recommended comprehensive privacy legislation as an ideal, serious consideration was paid to the role that specific agency rule-making efforts could play in this space. One of the topics for rulemaking was the concept of data anonymization. Participants considered how to achieve agreement on the ethical imperative for data anonymization. They identified some important steps toward anonymization, such as developing a more agreeable definition of “anonymous” that could be implemented by the many different parties involved in the research data sharing process and providing essential technical support to achieve the expected standards of data anonymization.
The Challenges of Sharing Data from the Perspective of Corporations
During our third dinner, the discussion focused on assessing researchers’ fitness to access an organization’s data. We also discussed evaluating research projects in light of public interest expectations. There was widespread agreement that data sharing is vital for various reasons, such as promoting the next generation of scientific breakthroughs and holding companies publicly accountable. There was less agreement, however, on how to ensure both that data is available for research and that individuals’ privacy is continuously protected.
Some asserted that privacy was being used by companies as an argument to protect their own interests, and that it is not as difficult a standard to achieve as is often described. Others disagreed with this assessment, saying that they always assume the worst when it comes to the efficacy of privacy protections.
There are also technical and social barriers to democratizing access to corporate data for research. Participants pointed out that technical barriers range from low bars, like file size and type, to high ones, such as overcoming data fragmentation, ensuring personnel expertise when reviewing projects, building and maintaining shareable data, and managing sector-specific privacy legislation that governs what companies must do to meet existing data privacy requirements.
Social barriers were discussed as high bars, like limiting access to researchers affiliated with the “right” institutions. Participants discussed how to sufficiently democratize know-how to expand corporate data-sharing and build and maintain the trusted network relationships critical for facilitating data sharing across various parts of the researcher-company environment. Consent reemerged as both a technical and social barrier to data sharing. In particular, participants addressed the problem of securing consumers’ meaningful consent for the use of data in unforeseen but beneficial research use cases that may arise far in the future.
Legislation, Regulation, and Standardization in Data Sharing
During the final dinner conversation, participants tackled the challenging issues of legislation, regulation, and standardization in the research data sharing environment. There was broad agreement that there should be standards for data sharing to make the process more accessible and data more usable. Most participants agreed that data should be FAIR (findable, accessible, interoperable, and reusable) and harmonized. Still, there was disagreement over what field or institution is a good model for this (economics, astronomy, and the US Census were discussed as possibilities).
There was agreement that researchers should meet a certain standard to be given access, but this must be done carefully to avoid creating tiers of first and second-class researchers. The discussion highlighted the importance of having shared standards, vocabulary, terminology, and expectations about the amount of data and supporting material to be transferred.
Interoperability of terms, ontologies, and expectations was another concern flagged throughout the dinner; merely having data available to researchers does not guarantee that they can use it. There was disagreement about what kind of role the National Institutes of Standards and Technology (NIST), the Federal Trade Commission (FTC), and the National Science Foundation (NSF), or researchers’ professional institutions should play or if all of them should play a role in enforcing these standards.
Access to the code used to process data represents another barrier to research; without interoperability and code sharing, it is difficult to replicate experiments and build on prior discoveries. There was agreement that the unethical side of data use could complicate any efforts to create positive benefits. Those challenges include zombie data, predatory publication outlets, rogue analysts, and restricting access to research that may have national security implications.
Some Topics Came Up Repeatedly
Persistent topics of discussion throughout the dinners that should be addressed through future legislative or regulatory efforts included: ensuring data quality, data storage requirements (i.e., whether data resides with the firm or with a third party), the incentive structure for academics to share their data with other scholars and with companies, and the emerging role for synthetic data as a method for sharing valuable data representation without transferring the customers’ actual specific and sensitive data.
The series also tackled challenging privacy questions in general, such as: are there special considerations for sharing the data of children or teens (or other vulnerable or protected classes)? Is there a role for funders and publishers to more strongly require documentation for verifying accountability around the use of shared data? Is there a need for involvement by the Office of Research Integrity (ORI) and research misconduct investigators in the supervision of research data sharing?
Next steps toward Responsible Research Data Sharing
In the coming weeks and months, FPF will work with participants in the dinner series to consolidate the knowledge shared during the salon series into a “Playbook for Responsible Data Sharing for Research.” Developed for corporate data protection officers and their counterparts in research institutions, this playbook will cover:
the contracting, capacity-stabilization, and accountability assurances that should govern research projects using shared data;
managing the review of ethics and research project design while respecting the independence of researchers using shared data;
the challenges that researchers must surmount to access and use shared data resources;
the need for effective communication of the findings from such research projects.
We look forward to sharing the “Playbook for Responsible Data Sharing for Research” with the FPF community and our many new friends and partners from the research community in the early months of 2022. Follow FPF on LinkedIn and Twitter, and subscribe to email to receive notification of its release.
FPF in 2021: Delivering Privacy Insights & Expert Analysis
With the last days of 2021 upon us, we wanted to take a moment to reflect on this exciting year that saw FPF expand its presence both domestically and around the globe, while producing engaging events, thought-provoking analysis, and insightful reports with real-world impact.
Growing Global Expertise
The scope of FPF’s international work continued to expand this year, as policymakers around the world are focused on ways to establish or improve privacy frameworks. More than 120 countries have now enacted a privacy or data protection law, and FPF both closely followed and advised upon significant developments in Asia, the European Union, and Latin America.
FPF saw its presence in Asia grow substantially this year with the opening of the FPF Asia-Pacific office, headed by Dr. Clarisse Girot. The FPF Asia-Pacific office will provide expertise in digital data flows and discuss emerging data protection issues in a way that is useful for regulators, policymakers, and data protection professionals. Along with the opening of the office, FPF also announced a partnership with the Asian Business Law Institute (ABLI) to support the convergence of data protection regulations and best privacy practices in the Asia-Pacific region. The Asia-Pacific office held several events in the months following its opening, including a virtual event during Singapore’s Personal Data Protection Week and an event co-hosted with the Asian Development Bank titled Trade-Offs or Synergies? Data Privacy and Protection as an Engine of Data-Driven Innovation.
Following the Indian government’s passage of regulations that impose strict rules on the removal of illegal content and the automated scanning of online content, FPF published a review of the new rules and included relevant resources with more information. This year also saw FPF announce Malavika Raghavan as the new Senior Fellow for India. This appointment further expanded FPF’s reach in Asia to one of the key jurisdictions for the future of data protection and privacy law.
International data flows have been an important topic of discussion over the past year. Following the Schrems II decision in 2020, which had serious implications for data flows coming from the EU into the US, the FPF global team created a series of informative infographics that explains the complexity of international data flows in two distinct contexts: retail and education services.
Scholarship & Analysis on Impactful Topics
The core of FPF’s work remains focused on providing insightful, independent analysis on pressing privacy issues. 2021 saw FPF provide this important leadership through events, awards, projects, papers, and more, providing insights into issues such as academic data sharing, digital contact tracing technologies, and neurotechnologies.
For the second year, FPF recognized privacy-protective research collaboration between a company and researchers with the Award for Research Data Stewardship. The first winning project this year is a collaboration between Stanford Medicine researchers led by Tejaswini Mishra, Ph.D., Professor Michael Snyder, Ph.D., and medical wearable and digital biomarker company Empatica. The other team recognized is a collaboration between Google’s COVID-19 Mobility Reports and COVID-19 Aggregated Mobility Research Dataset projects, and researchers from multiple universities in the United States and around the globe. These projects demonstrated how privately-held data can be responsibly shared with academic researchers, supporting significant progress in medicine, public health, education, social science, and other fields.
FPF created a new Open Banking Working Group to discuss issues surrounding open banking. FPF has released several blog posts and hosted events on the topic, with more to come in the new year.
FPF offered resources and best practices for a variety of topics this year. In August, with support from the Robert Wood Johnson Foundation, we developed actionable guiding principles to bolster the responsible implementation of digital contact tracing technologies (DCTT). The principles we laid out allow organizations implementing this technology to do so in a way that takes a responsible approach to how their technology collects, tracks, and shares personal information.
It is important to take steps to ensure equity in access to DCTT and understand the societal risks and tradeoffs that may accompany its implementation. Privacy leaders who understand these risks will be better able to bolster trust in this technology within their organizations.
To better assist organizations’ shared mobility data access and reduce privacy risks in their data-sharing process, FPF and SAE’s Mobility Data Collaborative (MDC) created a transportation-tailored privacy assessment that provides practical guidance for data from ride-hailing services, e-scooters, or bike-sharing programs.
“Micromobility services can play a key role in improving access to jobs, food and health care. However, there are multiple factors for companies and government agencies to consider before sharing mobility data with other organizations, including the precision, immediacy, and type of data shared.”
FPF and the Privacy Tech Alliance released a report titled, “Privacy Tech’s Third Generation: A Review of the Emerging Privacy Tech Sector,” which analyzed the evolving privacy technology market, examined trends and predictions in the field, and identified five market trends and their implications for the future. The report focused on the COVID-19 pandemic’s role in accelerating the global marketplace adoption of privacy tech.
FPF held a series of workshops focused on manipulative design with technical, academic, and legal experts to define clear areas of focus for consumer privacy, and guidance for policymakers and legislators. These workshops looked at manipulative design through a variety of different contexts including youth and education, online advertising and U.S. law, and GDPR and European law. The issue of manipulative design, transparency, and trust was also discussed during the first annual Dublin Privacy Symposium, which was hosted by FPF.
In collaboration with the IBM Policy Lab, FPF released a set of recommendations to promote privacy and mitigate risks associated with brain-computer interfaces. The report provides developers and policymakers with actionable ways this technology can be implemented while protecting the privacy and rights of its users. Following the release of the report, FPF and the IBM Policy Lab hosted an online event discussing the report and the brain-computer interface field more broadly.
FPF recognizes the need for access to personal information for independent research and for platform accountability and supports this research when it is done responsibly. In November and December, FPF hosted a series of salon dinners titled, “Promoting Responsible Research Data Access,” which brought together the many voices needed for a robust conversation on how we can unlock data for scientific research and will lead to a playbook for privacy-protective research access to corporate data.
Expanding the Conversation Around Responsible Data Use
FPF continues to convene industry experts, academics, consumer advocates, and other experts to explore the challenging issues in the data privacy field. Members of our team have also testified in front of state and national legislative bodies as experts for potential privacy legislation.
For the 11th year in a row, FPF recognized leading privacy research and analytical work with the Privacy Papers for Policymakers Award, held virtually for the first time. The winners spoke on their research in front of an audience of academic, industry, and policy professionals in the privacy field. The event was headlined by a keynote address from FTC Chairwoman Rebecca Kelly Slaughter, her first major speech as then-Acting Chair of the FTC. In her remarks, she focused on making enforcement more efficient and effective, how to protect privacy during the pandemic, and the overlap of COVID-19 and issues related to privacy.
FPF launched a new training program in 2021 focused on the use of data-driven technologies. The Understanding Digital Data Flows training program provided a deep dive into how technology and personal data are utilized in a variety of sectors. The training sessions were led by FPF experts and discussed topics including artificial intelligence, de-identification, and more. These informative trainings will continue into 2022 and the first eight sessions are already open for registration.
In the same vein, FPF released a series of insights for lawyers to understand before advising clients on issues of artificial intelligence. Among the insights were an explanation of AI’s probabilistic, complex, and dynamic nature, the importance of transparency in AI use, and the issue that algorithmic bias presents to AI users.
Laws like ECOA, GDPR, CPRA, the proposed EU AI regulation, and others are forming a legal foundation for regulating AI… As more organizations begin to entrust AI with high-stakes decisions, there is a reckoning on the horizon.
To add to the conversation surrounding COPPA and verifiable parental consent, FPF released a report outlining suggested solutions collected through research and insights from stakeholders. In the report, key friction points in the verifiable consent process are identified, which include: efficiency, accessibility, privacy and security, and convenience and cost barriers. Throughout the year, FPF collected comments from industry professionals, advocates, and academics to help identify possible solutions to untangle the challenges associated with verifiable parental consent, which will inform our work in 2022.
Following the release of a report which provided recommendations on the use of augmented and virtual reality technologies, FPF hosted XR Week, a week dedicated to ethical and privacy concerns of AR and VR technologies. The week included several events including a roundtable with expert participants and several conversations held in a virtual reality space.
During debate over Maryland HB 1062, which proposed several updates to Maryland’s Student Data Privacy Act, FPF’s Amelia Vance testified in front of the Maryland House Ways and Means Committee on the bill. In her testimony, Amelia voiced her approval of many of the proposed updates and offered recommendations on two amendments, clarifying how the bill defines “operator,” and the scope of the Council’s recommendations.
The FPF Youth & Education team released a series of resources focused on school surveillance and student monitoring. In October, the team released an infographic, “Understanding Student Monitoring,” that depicts reasons schools monitor student activity, what types of data are being monitored, and how that data can be utilized. Following reports that the Pasco County (FL) Sheriff’s Office was keeping a list of students who may be “potential criminals,” FPF released a report advocating for transparency and accountability for parents and students, FERPA compliance, and more robust privacy training for law enforcement and SROs.
Earlier this month, Stacey Gray testified in front of the U.S. Senate Finance Subcommittee on Fiscal Responsibility and Economic Growth on consumer privacy in the technology sector. Her testimony focused on the term “data brokers” and explained how third-party data processing is central to many concerns around privacy, fairness, accountability, and crafting effective privacy regulation.
The FPF team welcomed many new faces during 2021 and saw the promotion of key staff members to senior positions. John Verdi became Senior Vice President of Policy, Amelia Vance was elevated to Vice President of the Youth & Education program, Gabriela Zanfir-Fortuna was promoted to Vice President for Global Privacy, and Stacey Gray was promoted to Director of Legislative Research & Analysis. This year, the leadership team also saw the addition of Amie Stepanovich as Vice President of U.S. Policy and Rebekah Stroman as Chief of Staff. 2021 also saw us welcome Clarisse Girot, Lee Matheson, Keir Lamont, Tatiana Rice, Nancy Levesque, Payal Shah, Joanna Grama, and Jim Siegl to the FPF staff.
“The FPF team has grown to meet the need for independent privacy expertise, especially in the international, youth & education, and policy spaces,” said Jules Polonetsky, CEO of FPF. “I could not be more proud of the high-quality work that the FPF staff has produced to increase understanding of how technology impacts civil and human rights. We’re looking forward to 2022 and wish everyone a Happy Holidays and a Happy New Year.”
This is by no means a comprehensive list of all of FPF’s important work in 2021, but we hope it gives you a sense of the impact that our work had on both the privacy community and society at large. Keep updated on FPF’s work by subscribing to our monthly briefing and following us on Twitter and LinkedIn.
On behalf of the entire FPF staff, we wish you a Happy New Year!
Public Comments Surface Fault Lines in Expectations for New California Privacy Law
In November 2020, California voters adopted the California Privacy Rights Act (“CPRA”) ballot initiative, which was developed to strengthen and expand upon the underlying California Consumer Privacy Act (“CCPA”) that the state legislature adopted in 2018. While the CPRA provides for significant new consumer rights and responsible data processing obligations on covered businesses, many questions regarding the scope and practical operation of these requirements remain unresolved. A recently released set of public comments on a CPRA rulemaking process brings some of these contested issues into sharper focus.
The CPRA delegates both rulemaking and enforcement authority to a brand new, privacy-specific body, the California Privacy Protection Agency (“the Agency”). Following the appointment of a governing board, the Agency took its first public-facing steps toward rulemaking in September 2021, issuing an invitation for comment on eight topics focused on new and undecided issues introduced by the CPRA. Last week, the Agency published approximately 70 submissions that it received during the course of its 45-day comment period.
A variety of individuals and organizations filed comments, including trade associations and companies representing diverse industry sectors, consumer rights groups, and academics. One noteworthy filing is from Californians for Consumer Privacy, a nonprofit organization helmed by Alastair Mactaggart. Given the group’s role in drafting the California Privacy Rights Act ballot initiative and driving the public advocacy campaign that led to its adoption, these comments are indicative of the intent behind some of the ambiguous and contested provisions of the CPRA.
Across hundreds of pages of comments, stakeholders displayed sharp disagreements on what the CPRA does and should require on multiple consequential issues. These contested topics for CPRA rulemaking include (1) how businesses should conduct and submit privacy and security risk assessments, (2) the ways that automated decisionmaking technologies shall be regulated, (3) whether the CPRA requires the recognition of user enabled opt-out signals, (4) the scope of the Agency’s audit authority, and (5) how the Agency should further define and regulate manipulative design interfaces known as “dark patterns.”
1. Privacy and Security Risk Assessments
The CPRA brings California into greater alignment with other global and domestic privacy frameworks by requiring organizations engaged in data processing that poses a “significant risk” to consumer privacy and security to conduct and submit to the Agency risk assessments on a “regular basis.” However, the CPRA leaves many details to Agency regulations, including the specific activities that trigger the requirement to conduct an assessment, the scope and procedures for completing assessments, and the cadence for submitting assessments to the Agency. Comments revealed a variety of preferences for how and when businesses should be required to conduct and submit assessments.
Filings from industry stakeholders frequently raised concerns that the adoption of overly formalistic procedures and reporting requirements for risk assessments would create unnecessary burdens for both businesses and the Agency. Multiple industry groups suggested that assessments should be submitted to the Agency only upon request (consistent with the Virginia and Colorado privacy laws), or, if mandatory, once every three years. Civil society organizations typically sought to impose more expansive assessment requirements on covered businesses, with one coalition arguing that assessments should be conducted in advance of any change in business practices that “might alter the resulting risks to individuals’ privacy,” and be resubmitted to the Agency at six-month intervals.
Californians for Consumer Privacy encouraged the Agency to adopt a graduated approach, with requirements to conduct risk assessments initially falling on only large processors of personal information. The group further suggested variable timing requirements for submitting those assessments established on the basis of the “intensity” of personal information and sensitive personal information processing.
2. Automated Decisionmaking Technology
The CPRA directs the Agency to develop regulations “governing access and opt-out rights” with respect to the use of automated decisionmaking technology (“ADT”), including “profiling.” The Agency sought comments on multiple aspects of these rights, including the activities that should constitute regulated ADT, what businesses should do to provide consumers with “meaningful information about the logic” of automated decisionmaking processes, and the scope of consumers’ opt-out rights with regard to ADT. Industry and civil society comments differed on how to define the scope of ADT and whether the CPRA creates a standalone consumer right to opt-out of ADT beyond the CPRA’s rights to opt-out of the sale and sharing of personal information and to limit the use of sensitive personal information.
Numerous commenters, including the Future of Privacy Forum, recommended that the Agency limit the scope of regulated ADT to decisions that produce “legal or similarly significant effects” for consumers, noting a similar standard under the GDPR. Legal or similarly significant effects would include, for example, automatic refusal of an online credit application; decisions made by online job recruitment platforms; decisions that affect other financial, credit, employment, health, or education opportunities; and likely, in certain contexts, behavioral advertising.
Several industry groups such as the California Grocers Association further sought to ensure that the regulations will govern only “fully” automated processes that produce “final” decisions. Supporting this analysis, many commenters pointed to a universe of clearly low-risk, socially beneficial tools such as calculators, spreadsheets, GPS systems, and spell-checkers that could be swept up by overly broad regulation. Civil society groups including EFF and EPIC largely took a different approach, arguing that given emerging concerns of algorithmic harm and bias, the Agency’s regulations should more broadly define ADT, to include, for example, “systems that provide recommendations, support a decision, or contextualize information.”
Notably, Californians for Consumer Privacy argued that the Agency’s regulations should “specify that consumers have the right to opt-out of this automated decisionmaking” (referencing the online advertising ecosystem), and that the Agency should subsequently expand the right to opt-out of ADT to “other areas of online and business activity.” In stark contrast to this view, several industry groups argued that the Agency cannot create a standalone consumer right to opt-out of ADT as such a right is not provided for in the CPRA itself. Two prominent trade associations, CTIA and TechNet, further asserted that such a delegation of rulemaking authority would be “unconstitutional.”
3. Opt-Out Preference Signals
One of the most high-profile debates in the present consumer privacy landscape concerns the adoption of “user-enabled global privacy controls,” a potentially broad array of technical signals first recognized under the CCPA’s regulations. In July 2021, a California Attorney General FAQ page was updated to assert that one such tool, a browser signal named the Global Privacy Control (“GPC”), “must be honored by covered businesses as a valid consumer request to stop the sale of personal information.” The public comments revealed stark differences in statutory interpretation as to whether or not the CPRA requires that businesses honor this class of controls.
Industry groups including ESA, California Retailers Association, and the California Chamber of Commerce largely adopted the interpretation that the text of the CPRA makes business recognition of opt-out preference signals optional, based on the reading that CPRA sections 1798.135(b)(1),(3) offer multiple paths to complying with the exercise of user rights. One exception came from Mozilla, which recently implemented the GPC in the Firefox browser, and noted that enforceability of preference signals under the CCPA “remains ambiguous” and encouraged the Agency to expressly require that companies comply with the GPC under the CPRA.
On the other hand, civil society organizations tended to argue that the CPRA expressly mandates the recognition of global signals, pointing to section 1798.135(e), which concerns the exercise of consumer rights (including by opt-out signals) carried out by other authorized persons. Consumer Reports argued that recognition of these signals is required by the “plain language” of this provision and also noted that this interpretation would be consistent with the CPRA’s stated purpose of strengthening the CCPA. Californians for Consumer Privacy also took a firm stance, arguing that “there is no reading of the statute that would allow a business to [refuse] to honor a global opt-out signal enabled by a consumer” and criticized “misinformation we have seen from the advertising and technologies industries” on the scope of CPRA opt-out rights.
In the Future of Privacy Forum’s comments, we noted that regardless of whether the recognition of global opt-out signals is mandated or voluntary, the Agency has an important opportunity to set clear standards for the adoption of signals that will comply with the CPRA, GDPR, or the Colorado Privacy Act (which will require recognition of certain preference signals by 2025). In this context, the Agency should work with expert stakeholders to address many unresolved operational issues, such as how signals should be interpreted if they conflict with other consumer choices, and establish procedures for the approval of new signals over time.
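For readers unfamiliar with how the GPC works in practice, the published GPC proposal specifies that a participating browser transmits the signal as an HTTP request header, `Sec-GPC: 1`. The sketch below shows how a covered business's server might detect that header; the helper name `honors_gpc` and the dictionary-based header representation are illustrative assumptions, not part of any statute or the GPC specification.

```python
def honors_gpc(headers: dict) -> bool:
    """Return True if the request carries a Global Privacy Control signal.

    Per the GPC proposal, a participating user agent sends the header
    `Sec-GPC: 1`; any other value, or the header's absence, means no
    signal. HTTP header names are case-insensitive, so we normalize
    before the lookup. (`honors_gpc` is a hypothetical helper for
    illustration only.)
    """
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("sec-gpc") == "1"


# Under the Attorney General's interpretation of the CCPA, a request
# like this would be treated as a valid do-not-sell request:
request_headers = {"User-Agent": "Firefox/95.0", "Sec-GPC": "1"}
print(honors_gpc(request_headers))   # True
print(honors_gpc({"Sec-GPC": "0"}))  # False
```

Note that detecting the signal is the easy part; the operational questions FPF raised, such as how to resolve a GPC signal that conflicts with a consumer's other recorded choices, are not answered by the specification and remain for the Agency's rulemaking.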
4. Agency Audit Authority
The CPRA empowers the Agency to conduct audits of businesses to ensure compliance with the Act. Again, many of the details of the breadth and conduct of such audits are left to rulemaking, and the Agency requested expansive feedback on issues including the scope of its audit authority, the processes that audits should follow, and the criteria the Agency should use in selecting businesses to audit.
Californians for Consumer Privacy stated that the Agency Auditor’s scope should “only be limited by whether a request is reasonably linked to a potential violation of the CPRA.” The group further argued that the Agency should leave the determination of its auditing criteria to its Executive Director and Auditor rather than through rulemaking, so as not to alert businesses to these factors.
In contrast, industry groups suggested multiple approaches to clearly defining audit authority and criteria. Popular recommendations include requirements that the Agency (1) have evidence of a violation of a substantive provision of the CPRA that risks significant harm to consumers prior to initiating an audit, (2) provide 90 days’ notice to a business prior to an audit, (3) impose guardrails to ensure that audits are separate and independent from the Agency’s investigation and enforcement teams, and (4) create “fair and equal treatment” rules for determining which companies are audited.
5. “Dark Patterns”
Finally, the Agency requested feedback on a number of definitions used by the CPRA, including manipulative design interfaces known as “dark patterns.” The CPRA defines “dark patterns” as “a user interface designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision-making, or choice, as further defined by regulation.” The Act contains relatively limited prohibitions on their use: it states that the use of “dark patterns” invalidates user “consent,” and it directs Agency rulemaking to ensure that web pages permitting users to opt back in to the sale or use of their information under the CPRA do not utilize “dark patterns.” Nevertheless, the concept of “dark patterns” has received increasing regulatory attention in recent years and has been flagged by Agency Board Chairperson Urban as a potential subject for discussion at a forthcoming series of “informational hearings.”
Industry groups such as the Internet Association raised concerns with the definition of “dark patterns” under the CPRA, arguing that essentially any interface could be interpreted as impairing user choice and therefore be considered a “dark pattern” under the Act, including the use of privacy-protective default settings. Several of these organizations requested that the definition of “dark patterns” be narrowed to focus on design practices that amount to consumer fraud and encouraged forthcoming regulations to provide clear examples of such conduct.
In contrast, a group of Stanford academics led by Professor Jen King suggested regulation on this subject beyond the context of consent interfaces and specifically requested an expanded definition of “dark patterns” to encompass novel interfaces such as voice-activated systems. Similarly, despite raising concerns with the suitability of the term “dark patterns,” Common Sense Media suggested defining manipulative designs “as broadly as possible” to include features that encourage children to share personal information.
Conclusion
The Agency’s request for comments has revealed significant divergences among stakeholders, in both policy preferences and statutory interpretation, over the appropriate scope and application of CPRA requirements. The resolution of these contested issues through Agency rulemaking will likely carry significant implications for the exercise of consumer rights under the CPRA, as well as for the practical compliance obligations of covered businesses. Interested parties can expect to learn more about the ultimate scope and operation of the CPRA in early 2022, when the Agency intends to publish its initial set of proposed regulations and statement of reasons.