Privacy Best Practices for Rideshare Drivers Using Dashcams
FPF & Uber Publish Guide Highlighting Privacy Best Practices for Drivers Who Record Video and Audio on Rideshare Journeys
FPF and Uber have created a guide for US-based rideshare drivers who install “dashcams” – video cameras mounted on a vehicle’s dashboard or windshield. Many drivers install dashcams to improve safety, security, and accountability; the cameras can capture crashes or other safety-related incidents outside and inside cars. Dashcam footage can be helpful to drivers, passengers, insurance companies, and others when adjudicating legal claims. At the same time, dashcams can pose substantial privacy risks if appropriate safeguards are not in place to limit the collection, use, and disclosure of personal data.
Dashcams typically record video outside a vehicle. Many dashcams also record in-vehicle audio and some record in-vehicle video. Regardless of the particular device used, ride-hail drivers who use dashcams must comply with applicable audio and video recording laws.
The guide explains relevant laws and provides practical tips to help drivers be transparent, limit data use and sharing, retain video and audio only as long as necessary, and use strict security controls. The guide highlights ways that drivers can employ physical signs, in-app notices, and other means to ensure passengers are informed about dashcam use and can make meaningful choices about whether to travel in a dashcam-equipped vehicle. Drivers seeking advice concerning specific legal obligations or incidents should consult legal counsel.
Privacy best practices for dashcams include:
Give individuals notice that they are being recorded
Place recording notices inside and on the vehicle.
Mount the dashcam in a visible location.
Consider, in some situations, giving an oral notification that recording is taking place.
Determine whether the ride sharing service provides recording notifications in the app, and utilize those in-app notices.
Only record audio and video for defined, reasonable purposes
Only keep recordings for as long as needed for the original purpose.
Inform passengers as to why video and/or audio is being recorded.
Limit sharing and use of recorded footage
Only share video and audio with third parties for relevant reasons that align with the original reason for recording.
Thoroughly review the rideshare service’s privacy policy and community guidelines if using an app-based rideshare service, and be aware that many rideshare companies maintain policies against widely disseminating recordings.
Safeguard and encrypt recordings and delete unused footage
Identify dashcam vendors that provide the highest privacy and security safeguards.
Carefully read the terms and conditions when buying dashcams to understand the data flows.
Uber will make these best practices available to drivers in its app and on its website.
Many ride-hail drivers use dashcams in their cars, and the best practices published today provide practical guidance to help drivers implement privacy protections. But driver guidance is only one aspect of ensuring individuals’ privacy and security when traveling. Dashcam manufacturers must implement privacy-protective practices by default and provide easy-to-use privacy options. At the same time, ride-hail platforms must provide drivers with the appropriate tools to notify riders, and carmakers must safeguard drivers’ and passengers’ data collected by OEM devices.
In addition, dashcams are only one example of increasingly sophisticated sensors appearing in passenger vehicles as part of driver monitoring systems and related technologies. Further work is needed to apply comprehensive privacy safeguards to emerging technologies across the connected vehicle sector, from carmakers and rideshare services to mobility services providers and platforms. Comprehensive federal privacy legislation would be a good start. And in the absence of Congressional action, FPF is doing further work to identify key privacy risks and mitigation strategies for the broader class of driver monitoring systems that raise questions about technologies beyond the scope of this dashcam guide.
The Children’s Online Privacy Protection Act (COPPA), enacted by Congress in 1998, aims to give parents more control over the information collected about their children online. The law requires operators of games, websites, apps, and other online services directed to users under the age of 13 to obtain permission from a child’s parent before collecting information about them. Protected data includes a child’s personal details, such as name, home address, email address, and phone number; geo-location information; online activity tracking data; and photo, video, and audio files.
Interest in children’s online privacy and safety is high and likely to continue to grow in the coming months. Congressional activity is picking up, and the FTC’s latest review of the COPPA rule is ongoing, with a draft rule expected at some point in 2022. Policymakers must understand the current state of play for kids online as they continue to have these important discussions, and we welcome the opportunity to discuss these issues further. Please feel free to contact us here at any time.
BCIs & Data Protection in Healthcare: Data Flows, Risks, and Regulations
This post is the second in a four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs.
Click here for FPF and IBM’s full report: Privacy and the Connected Mind. In case you missed it, read the first blog post in this series, which unpacks BCI technology. Additionally, FPF-curated resources, including policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are here.
I. Introduction: What are BCIs?
BCIs are computer-based systems that directly record, process, or analyze brain-specific neurodata and translate these data into outputs. Those outputs can be used as visualizations or aggregates for interpretation and reporting purposes and/or as commands to control external interfaces, influence behaviors or modulate neural activity. BCIs can be broadly divided into three categories: 1) those that record brain activity; 2) those that modulate brain activity; or 3) those that do both, also called bi-directional BCIs (BBCIs).
BCIs can be invasive or non-invasive and employ a number of techniques for collecting neurodata and modulating neural signals. Neurodata is data generated by the nervous system, consisting of the electrical activity between neurons or proxies of that activity. This neurodata may be “personal neurodata” if it is reasonably linkable to an individual.
II. Health-related BCIs Diagnose Medical Conditions, Modulate Brain Activity for Cognitive Disorder Management, and Promote Accessibility
Facilitating Diagnoses: BCIs can be used to help make certain diagnoses by providing a means for practitioners to quantify fatigue, identify depression, and measure stress. Diagnostic BCIs can also assist even when a patient is unable to provide responses. These situations may occur when patients experience disorders of consciousness, such as locked-in syndrome, whereby individuals are fully conscious but unable to move, speak, or explain how they are feeling. Additionally, current research efforts focus on BCI applications that diagnose the stage and advancement of progressive conditions, such as glaucoma.
Modulating the Brain to Treat or Overcome Conditions: While diagnosis typically involves simply recording brain activity, other health-related BCI uses may actively modulate patients’ brains and nervous systems. For example, brain modulation can be used to disrupt seizures for epilepsy patients. Recent advances in interventive BCI modulation include a vision restoration study in which the image bypasses the eye and the optic nerve in order to feed directly to the brain—resulting in low-resolution vision capabilities.
Improving Accessibility and Rehabilitation Opportunities: The latest prosthetic limbs (i.e., neuroprosthetics) rely on BCIs, which enable the limbs to move in response to thought stimuli. Examples of this BCI application include robotic arms, as well as BCI-powered automatic wheelchairs. Users control neuroprosthetics and personal devices through BCIs that collect neurodata about intended limb movements or about an activity associated with what the user wants to do. An example of the latter involves users thinking of physical activities like “eating,” rather than specific words like “table,” to direct their chair to a nearby object. BCIs can also act as the channel for providing haptic feedback or haptic sensory replacement within prosthetics and exoskeletons for purposes of patient rehabilitation, regaining sensation, and an increased ability for patients to perform previously inaccessible tasks.
There are also efforts to connect BCIs with smart devices and the Internet of Things (IoT), which could provide individuals experiencing neurological disorders or motor impairments with greater independence in the ability to perform daily living activities. These efforts could improve or sustain a user’s quality of life through increased accessibility within their home environment.
Beyond Medicine – BCIs and Commercial Wellness: BCIs are also starting to emerge in the commercial wellness space as a method of personal data tracking, intended as a means of improving cognitive abilities (such as attention) and/or mental and physical health (such as sleep monitoring). Many of these wellness BCIs overlap with functions included in the gaming and toy space. The NeuroSky Mindwave Mobile 2: Brainwave Starter Kit provides the user with information about their brain’s electrical impulses when relaxing and when listening to music. The product includes an EEG-fitted headband and connects to companion apps via Bluetooth. The device also provides training games purported to help improve meditation and attention and to enhance the user’s learning effectiveness. Further, the device includes tools for players to create their own brain-training games.
III. Health-related BCI Risks Include Security Breaches, Infringement on Mental Privacy, and Data Inaccuracy
Security Breaches: Security breaches are some of the most prominent risks in the health BCI space. Like other technology-based medical devices, BCIs are vulnerable to cyber risks. Researchers recently showed that hackers, through imperceptible noise variations of an EEG signal, could force BCIs to spell out certain words that do not align with the wearer’s actual thoughts or intentions. The consequence of these security vulnerabilities can range from user frustration to severe misdiagnosis and physical harm. Breaches of BCIs may also compromise sensitive health information that could be captured or inadvertently shared.
BCI Accuracy: An equally important risk among health-related BCIs is the extent to which device accuracy is verifiable and sufficient. In many applications, high reliability of medical BCIs is critical because inaccurate interpretation or modulation of a patient’s brain could result in serious consequences, including death. Patients relying on modulating BCIs to help mitigate cognitive disorders, such as epilepsy, could suffer grave health consequences if the BCI failed to work as intended and anticipated. Risks are particularly acute when patients rely on BCIs to communicate crucial information, such as their choices regarding treatment or even end-of-life decisions. Accuracy is also crucial to reliable, continuous accessibility, as prosthetic limbs, wheelchairs, and other devices controlled via BCIs must operate correctly and safely according to users’ intentions.
Infringement on Mental Privacy and BCI-informed Decision Making: Finally, BCIs also present privacy risks. These risks refer to unauthorized access to personal information, including the inferences drawn from an individual’s conscious or unconscious behaviors and intentions. In addition to the existing privacy risks around all personal health data, BCIs raise new mental privacy risks due to the capacity of the neural networks underpinning many of these devices to associate certain thoughts and the ability of BCIs to define and interpret subconscious or causally-connected intentions on a wider scale. For example, a BCI-controlled wheelchair and its underlying neural network might not only deduce that the user is thinking about food, therefore directing the chair to move toward the table, but also draw other conclusions about the individual’s biology and preferences, such as whether or not an individual is hungry or thirsty and at what times. These additional inferences capture new information about an individual’s thoughts, intentions, or interests, many of which are related to an individual’s specific biology and unique preferences.
Privacy risks are magnified when these new inferences are combined with other personal information to make decisions that impact the person’s life, potentially without their knowledge or consent. Organizations collecting and processing brain signals, leading to granular inferences tied to an individual, could have incentives to repurpose this data for unrequested treatments or non-medical purposes, many of which may expose potentially sensitive biological information to third parties. Additionally, the sharing of patient data associated with BCI use could potentially disclose an individual’s medical condition to employers, private companies, public entities, or governments.
IV. Some Health BCIs are Subject to Common Rule Requirements, FCC Oversight, or International Frameworks
Common Rule: Some of the advancements in health BCIs involve human subject research, which is governed by a complex regulatory framework. U.S. researchers whose projects are federally funded are typically required to obtain subjects’ informed consent for data collection based on approval from a Common Rule-based Institutional Review Board (IRB) prior to undertaking studies.
FCC Oversight: Wireless IoT BCI devices are likely subject to Federal Communications Commission (FCC) oversight because of their designation as connected wearables. However, given the lack of regulations around consumer wellness technologies, devices marketed outside of the physician-regulated context—such as brain training games and meditation-aiding devices—may lack strict oversight. For example, the Health Insurance Portability and Accountability Act (HIPAA) regulates covered entities such as physicians and health insurers that collect, use, process, and share health information, but does not usually apply to wellness device companies.
International Frameworks: In Europe, the General Data Protection Regulation (GDPR) is the applicable framework for any processing of personal data for the purposes of scientific research, including where the research relies on special categories of personal data, such as data related to health, and biometric data processed for identification. There are several lawful grounds for processing under Article 6(1) that would allow the necessary processing of personal data for BCI research, as well as several permissions under Article 9(2) for the use of sensitive personal data. In some situations, this could allow data controllers to conduct this type of research even without individual consent for the processing of the data, specifically when sensitive data is necessary for public health purposes or for research in the public interest; however, there are many complexities surrounding this sort of processing, with the European Data Protection Board (EDPB) expected to adopt Guidelines on processing of personal data for scientific research purposes in the near future. Given the complexities surrounding privacy in human subject research, health researchers and other stakeholders seeking to develop or adopt BCIs must understand and verify how the product fits into this shifting regulatory landscape.
The EU’s recently proposed draft AI regulation covers all AI systems, including those relying on biometric data—and is likely to be relevant for future regulation of personal neurodata, significantly altering the regulatory landscape around BCIs and neurotech. It specifically focuses on AI systems that pose high risks to individuals’ “health, safety and fundamental rights.” BCIs that might be considered “high risk” AI systems under the proposed regulation could trigger requirements prior to entering the market, such as going through a conformity assessment, adoption of adequate risk assessment, security guarantees, and adequate notice to the user, among others. If considered a “low risk” system, organizations would still have to fulfill transparency requirements. The full scope and impact of the EU’s AI regulation on the development and use of BCIs remains subject to the ongoing legislative process.
V. Conclusion
Health BCIs are set to influence and potentially improve healthcare by expanding accessibility and rehabilitation opportunities, as well as by giving medical practitioners new ways to diagnose and treat conditions. However, these applications are not without risk. The data flows that underpin medical BCIs raise privacy considerations, as well as risks in regard to how neurodata is secured and whether such data is accurate. Companies dealing with medical BCIs must remain abreast of these challenges and analyze how medical BCIs interact with a dynamic, global body of regulation.
Understanding why the first pieces fell in the transatlantic transfers domino
The Austrian DPA and the EDPS decided EU websites placing US cookies breach international data transfer rules
Two decisions issued by Data Protection Authorities (DPAs) in Europe and published in the second week of January 2022 found that two websites, one run by a contractor of the European Parliament (EP), and the other one by an Austrian company, have unlawfully transferred personal data to the US merely by placing cookies (Google Analytics and Stripe) provided by two US-based companies on the devices of their visitors. Both decisions looked into the transfers safeguards put in place by the controllers (the legal entities responsible for the websites), and found them to be either insufficient – in the case against the EP, or ineffective – in the Austrian case.
Both decisions affirm that all transfers of personal data from the EU to the US need “supplemental measures” on top of their Article 46 GDPR safeguards, in the absence of an adequacy decision and under the current US legal framework for government access to personal data for national security purposes, as assessed by the Court of Justice of the EU in its 2020 Schrems II judgment. Moreover, the Austrian case indicates that in order to be effective, the supplemental measures adduced to safeguard transfers to the US must “eliminate the possibility of surveillance and access [to the personal data] by US intelligence agencies”, seemingly putting to rest the idea of the “risk based approach” in international data transfers post-Schrems II.
This piece analyzes the two cases comparatively, considering they have many similarities beyond their timing: they both target widely used cookies (Google Analytics, in addition to Stripe in the EP case), they both stem from complaints where individuals are represented by the Austrian NGO noyb, and it is possible that they will be followed by similar decisions from the other DPAs that received a batch of 101 complaints in August 2020 from the same NGO, relying on identical legal arguments and very similar facts. It examines the most important findings made by the two regulators, showing how their analyses were in sync and how these analyses likely preface similar decisions for the rest of the complaints.
1. “Personal data” is being “processed” through cookies, even if users are not identified and even if the cookies are thought to be “inactive”
In the first decision, the European Data Protection Supervisor (EDPS) investigated a complaint made by several Members of the European Parliament against a website made available by the EP to its Members and staff in the context of managing COVID-19 testing. The complainants raised concerns with regard to transfers of their personal data to the US through cookies provided by US based companies (Google and Stripe) and placed on their devices when accessing the COVID-19 testing website. The case was brought under the Data Protection Regulation for EU Institutions (EUDPR), which has identical definitions and overwhelmingly similar rules to the GDPR.
One of the key issues that was analyzed in order for the case to be considered falling under the scope of the EUDPR was whether personal data was being processed through the website by merely placing cookies on the devices of those who accessed it. Relying on its 2016 Guidelines on the protection of personal data processed through Web Services, the EDPS noted in the decision that “tracking cookies, such as the Stripe and Google Analytics cookies, are considered personal data, even if the traditional identity parameters of the tracked users are unknown or have been deleted by the tracker after collection”. It also noted that “all records containing identifiers that can be used to single out users, are considered as personal data under the Regulation and must be treated and protected as such”.
The EP argued in one of its submissions to the regulator that the Stripe cookie “had never been active, since registration for testing for EU Staff and Members did not require any form of payment”. However, the EP also confirmed that the dedicated COVID-19 testing website, which was built by its contractor, copied code from another website run by the same contractor, and “the parts copied included the code for a cookie from Stripe that was used for online payment for users” of the other website. In its decision, the EDPS highlighted that “upon installation on the device, a cookie cannot be considered ‘inactive’. Every time a user visited [the website], personal data was transferred to Stripe through the Stripe cookie, which contained an identifier. (…) Whether Stripe further processed the data transferred through the cookie is not relevant”.
With regard to the Google Analytics cookies, the EDPS only notes that the EP (as controller) acknowledged that the cookies “are designed to process ‘online identifiers, including cookie identifiers, internet protocol addresses and device identifiers’ as well as ‘client identifiers’”. The regulator concluded that personal data were therefore transferred “through the above-mentioned trackers”.
In the second decision, which concerned the use of Google Analytics by a website owned by an Austrian company and targeting Austrian users, the DPA argued in more detail what led it to find that personal data was being processed by the website through Google Analytics cookies, under the GDPR.
1.1 Cookie identification numbers, by themselves, are personal data
The DPA found that the cookies contained identification numbers, including a UNIX timestamp at the end, which shows when a cookie was set. It also noted that the cookies were placed either on the device or the browser of the complainant. The DPA affirmed that relying on these identification numbers makes it possible for both the website and Google Analytics “to distinguish website visitors … and also to obtain information as to whether the visitor is new or returning”.
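To make the mechanics concrete, the sketch below splits a hypothetical Universal Analytics-style “_ga” cookie value into a client identifier and the UNIX timestamp described above; the cookie format and example value are assumptions for illustration, not facts drawn from the decision.

```python
# A minimal, illustrative sketch: splitting a hypothetical Universal
# Analytics-style "_ga" cookie value into a random client identifier and
# the UNIX timestamp set when the cookie was first placed.
from datetime import datetime, timezone

cookie_value = "GA1.2.1194339055.1597323541"  # hypothetical example value

_, _, random_part, first_set_ts = cookie_value.split(".")
client_id = f"{random_part}.{first_set_ts}"   # singles out this browser/device
first_set = datetime.fromtimestamp(int(first_set_ts), tz=timezone.utc)

print("Client identifier:", client_id)
print("Cookie first set (UNIX timestamp):", first_set.isoformat())
```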
In its legal analysis, the DPA noted that “an interference with the fundamental right to data protection … already exists if certain entities take measures – in this case, the assignment of such identification numbers – to individualize website visitors”. Analyzing the “identifiability” component of the definition of “personal data” in the GDPR, and relying on its Recital 26, as well as on Article 29 Working Party Opinion 4/2007 on the concept of “personal data”, the DPA clarified that “a standard of identifiability to the effect that it must also be immediately possible to associate such identification numbers with a specific natural person – in particular with the name of the complainant – is not required” for data thus processed to be considered “personal data”.
The DPA also recalled that “a digital footprint, which allows devices and subsequently the specific user to be clearly individualized, constitutes personal data”. The DPA concluded that the identification numbers contained in the cookies placed on the complainant’s device or browser are personal data, highlighting their “uniqueness”, their ability to single out specific individuals and rebutting specifically the argument the respondents made that no means are in fact used to link these numbers to the identity of the complainant.
1.2 Cookie identification numbers combined with other elements are additional personal data
However, the DPA did not stop here and continued at length in the following sections of the decision to underline why placing the cookies at issue when accessing the website constitutes processing of personal data. It noted that the classification as personal data “becomes even more apparent if one takes into account that the identification numbers can be combined with other elements”, like the address and HTML title of the website and the subpages visited by the complainant; information about the browser, operating system, screen resolution, language selection and the date and time of the website visit; the IP address of the device used by the complainant. The DPA considers that “the complainant’s digital footprint is made even more unique following such a combination [of data points]”.
The “anonymization function of the IP address” – a function that Google Analytics provides to its users if they wish to activate it – was expressly set aside by the DPA, considering that during fact-finding it was shown the function had not been correctly implemented by the website at the time of the complaint. However, later in the decision, with regard to the same function and the fact that it was not implemented by the website, the regulator noted that “the IP address is in any case only one of many pieces of the puzzle of the complainant’s digital footprint”, hinting that even if the function had been correctly implemented, it would not necessarily have led to the conclusion that the data being processed was not personal.
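For illustration, the sketch below shows the kind of last-octet truncation an IP “anonymization” setting is commonly understood to perform; the exact behaviour of the Google Analytics function is an assumption here, not taken from the decision.

```python
# A minimal sketch of last-octet IPv4 truncation, the kind of behaviour an
# "IP anonymization" setting is commonly understood to apply (assumed here
# for illustration; not a finding from the decision).
import ipaddress

def truncate_ipv4(addr: str) -> str:
    """Zero the final octet, e.g. 203.0.113.42 -> 203.0.113.0."""
    network = ipaddress.ip_network(f"{addr}/24", strict=False)
    return str(network.network_address)

print(truncate_ipv4("203.0.113.42"))  # 203.0.113.0 -- still one puzzle piece among many
```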
1.3 Controllers and other persons “with lawful means and justifiable effort” will count for the identifiability test
Drilling down even more on the notion of “identifiability” in a dedicated section of the decision, the DPA highlights that in order for the data processed through the cookies at issue to be personal, “it is not necessary that the respondents can establish a personal reference on their own, i.e. that all information required for identification is with them. […] Rather, it is sufficient that anyone, with lawful means and justifiable effort, can establish this personal reference”. Therefore, the DPA took the position that “not only the means of the controller [the website in this case] are to be taken into account in the question of identifiability, but also those of ‘another person’”.
After recalling that the CJEU repeatedly found that “the scope of application of the GDPR is to be understood very broadly” (e.g. C-439/19 B, C-434/16 Nowak, C-553/07 Rijkeboer), the DPA nonetheless stated that in its opinion, the term “anyone” it referred to above, and thus the scope of the definition of personal data, “should not be interpreted so broadly that any unknown actor could theoretically have special knowledge to establish a reference; this would lead to almost any information falling within the scope of application of the GDPR and a demarcation from non-personal data would become difficult or even impossible”.
This being said, the DPA considers that the “decisive factor is whether identifiability can be established with a justifiable and reasonable effort”. In the case at hand, the DPA considers that there are “certain actors who possess special knowledge that makes it possible to establish a reference to the complainant and identify him”. These actors are, from the DPA’s point of view, certainly the provider of the Google Analytics service and, possibly the US authorities in the national security area. As for the provider of Google Analytics, the DPA highlights that, first of all, the complainant was logged in with his Google account at the time of visiting the website.
The DPA indicates this is a relevant fact only “if one takes the view that the online identifiers cited above must be assignable to a certain ‘face’”. The DPA finds that such an assignment to a specific individual is in any case possible in the case at hand. As such, the DPA states that: “[…] if the identifiability of a website visitor depends only on whether certain declarations of intent are made in the account (user’s Google account – our note), then, from a technical point of view, all possibilities of identifiability are present”, since, as noted by the DPA, otherwise Google “could not comply with a user’s wishes expressed in the account settings for ‘personalization’ of the advertising information received”. It is not immediately clear how the ad preferences expressed by a user in their personal account are linked to the processing of data for Google Analytics (and thus website traffic measurement) purposes, and it seems that this was used in the argumentation to substantiate the claim that the second respondent generally has additional knowledge across its various services that could lead to the identification or the singling out of the website visitor.
However, following the arguments of the DPA, on top of the autonomous finding that cookie identification numbers are personal data, it seems that even if the complainant had not been logged into his account, the data processed through the Google Analytics cookies would still have been considered personal. In this context, the DPA “expressly” notes that “the wording of Article 4(1) of the GDPR is unambiguous and is linked to the ability to identify and not to whether identification is ultimately carried out”.
Moreover, “irrespective of the second respondent” – so even if Google admittedly did not have any possibility or ability to render the complainant identifiable or to single him out, other third parties in this case were considered to have the potential ability to identify the complainant: US authorities.
1.4 Additional information potentially available to US intelligence authorities, taken into account for the identifiability test
Lastly, according to the decision, the US authorities in the national security area “must be taken into account” when assessing the potential of identifiability of the data processed through cookies in this case. The DPA considers that “intelligence services in the US take certain online identifiers, such as the IP address or unique identification numbers, as a starting point for monitoring individuals. In particular, it cannot be ruled out that intelligence services have already collected information with the help of which the data transmitted here can be traced back to the person of the complainant.”
To show that this is not merely a “theoretical danger”, the DPA relies on the findings of the CJEU in Schrems II with regard to the US legal framework and the “access possibilities” it offers to authorities, and on Google’s Transparency Report, “which proves that data requests are made to [it] by US authorities.” The regulator further decided that even if it is admittedly not possible for the website to check whether such access requests are made in individual cases and with regard to the visitors of the website, “this circumstance cannot be held against affected persons, such as the complainant. Thus, it was ultimately the first respondent as the website operator who, despite publication of the Schrems II judgment, continued to use the Google Analytics tool”.
Therefore, based on the findings of the Austrian DPA in this case, at least two of the “any persons” mentioned in Recital 26 GDPR who will be considered when deciding whether someone has lawful means to identify data – so that the data is deemed personal – are the processor of a specific processing operation and the national security authorities that may have access to that data, at least in cases where this access is relevant (as in international data transfers). This latter finding raises the question whether national security agencies in a given jurisdiction may generally be considered by DPAs as actors with “lawful means” and additional knowledge when deciding if a data set links to an “identifiable” person, including in cases where international data transfers are not at issue.
The DPA concluded that the data processed by the Google Analytics cookies is personal data and falls under the scope of the GDPR. Importantly, the cookie identification numbers were found to be personal data by themselves. Additionally, the other data elements potentially collected through cookies together with the identification numbers are also personal data.
2. Data transfers to the US take place when cookies provided by US-based companies are placed on EU-based websites
Once the supervisory authorities established that the data processed through Google Analytics and, respectively, Stripe cookies, were personal data and were covered by the GDPR or EUDPR respectively, they had to ascertain whether an international transfer of personal data from the EU to the US was taking place in order to see whether the provisions relevant to international data transfers were applicable.
The EDPS was again concise. It stated that because the personal data were processed by two entities located in the US (Stripe and Google LLC) on the EP website, “personal data processed through them were transferred to the US”. The regulator strengthened its finding by stating that this conclusion “is reinforced by the circumstances highlighted by the complainants, according to which all data collected through Google Analytics is hosted (i.e. stored and further processed) in the US”. For this particular finding, the EDPS referred, under footnote 27 of the decision, to the proceedings in Austria “regarding the use of Google Analytics in the context of the 101 complaints filed by noyb on the transfer of data to the US when using Google Analytics”, in an evident indication that the supervisory authorities are coordinating their actions.
In turn, the Austrian DPA applied the criteria laid out by the EDPB in its draft Guidelines 5/2021 on the relationship between the scope of Article 3 and Chapter V GDPR, and found that all the conditions are met. The administrator of the website is the controller and it is based in Austria, and, as data exporter, it “disclosed personal data of the complainant by proactively implementing the Google Analytics tool on its website and as a direct result of this implementation, among other things, a data transfer to the second respondent to the US took place”. The DPA also noted that the second respondent, in its capacity as processor and data importer, is located in the US. Hence, Chapter V of the GDPR and its rules for international data transfers are applicable in this case.
However, it should also be highlighted that, as part of fact finding in this case, the Austrian DPA noted that the version of Google Analytics subject to this case was provided by Google LLC (based in the US) until the end of April 2021. Therefore, for the facts of the case which occurred in August 2020, the relevant processor and eventual data importer was Google LLC. But the DPA also noted that since the end of April 2021, Google Analytics has been provided by Google Ireland Limited (based in Ireland).
One important question that remains for future cases is whether, under these circumstances, the DPA would find that an international data transfer occurred, considering the criteria laid out in the draft EDPB Guidelines 5/2021, which specifically require (at least in the draft version, currently subject to public consultation) that “the data importer is located in a third country”, without any further specifications related to corporate structures or location of the means of processing.
2.1 In the absence of an adequacy decision, all data transfers to the US based on “additional safeguards”, like SCCs, need supplementary measures
After establishing that international data transfers occurred from the EU to the US in the cases at hand, the DPAs assessed the lawful ground for transfers used.
The EDPS noted that EU institutions and bodies “must remain in control and take informed decisions when selecting processors and allowing transfers of personal data outside the EEA”. It followed that, absent an adequacy decision, they “may transfer personal data to a third country only if appropriate safeguards are provided, and on condition that enforceable data subject rights and effective legal remedies for data subjects are available”. Noting that the use of Standard Contractual Clauses (SCCs) or another transfer tool do not substitute individual case-by-case assessments that must be carried out in accordance with the Schrems II judgment, the EDPS stated that EU institutions and bodies must carry out such assessments “before any transfer is made”, and, where necessary, they must implement supplemental measures in addition to the transfer tool.
The EDPS recalled some of the key findings of the CJEU in Schrems II, in particular the fact that “the level of protection of personal data in the US was problematic in view of the lack of proportionality caused by mass surveillance programs based on Section 702 of the Foreign Intelligence Surveillance Act (FISA) and Executive Order (EO) 12333 read in conjunction with Presidential Policy Directive (PPD) 28 and the lack of effective remedies in the US essentially equivalent to those required by Article 47 of the Charter”.
Significantly, the supervisory authority then affirmed that “transfers of personal data to the US can only take place if they are framed by effective supplementary measures in order to ensure an essentially equivalent level of protection for the personal data transferred”. Since the EP did not provide any evidence or documentation about supplementary measures being used on top of the SCCs it referred to in the privacy notice on the website, the EDPS found the transfers to the US to be unlawful.
Similarly, the Austrian DPA in its decision recalled that the CJEU “already dealt” with the legal framework in the US in its Schrems II judgment, as based on the same three legal acts (Section 702 FISA, EO 12333, PPD 28). The DPA merely noted that “it is evident that the second respondent (Google LLC – our note) qualifies as a provider of electronic communications services” within the meaning of FISA Section 702. Therefore, it has “an obligation to provide personally identifiable information to US authorities pursuant to 50 US Code §1881a”. Again, the DPA relied on Google’s Transparency Report to show that “such requests are also regularly made to it by US authorities”.
Considering the legal framework in the US as assessed by the CJEU, just like the EDPS did, the Austrian DPA also concluded that the mere entering into SCCs with a data importer in the US cannot be assumed to ensure an adequate level of protection. Therefore, “the data transfer at issue cannot be based solely on the standard data protection clauses concluded between the respondents”. Hence, supplementary measures must be adduced on top of the SCCs. The Austrian DPA relied significantly on the EDPB Recommendation 1/2020 on measures that supplement transfer tools when analyzing the available supplementary measures put in place by the respondents.
2.2 Supplementary measures must “eliminate the possibility of access” of the government to the data, in order to be effective
When analyzing the various measures put in place to safeguard the personal data being transferred, the DPA wanted to ascertain “whether the additional measures taken by the second respondent close the legal protection gaps identified in the CJEU [Schrems II] ruling – i.e. the access and monitoring possibilities of US intelligence services”. Setting this as a target, it went on to analyze the individual measures proposed.
The contractual and organizational supplementary measures considered in the case were:
notification of the data subject about data requests (if this is permissible at all in individual cases),
the publication of a transparency report,
the publication of guidelines “for handling government requests”,
careful consideration of any data requests.
The DPA considered that “it is not discernable” to what extent these measures are effective to close the protection gap, taking into account that the CJEU found in the Schrems II judgment that even “permissible (i.e. legal under US law) requests from US intelligence agencies are not compatible with the fundamental right to data protection under Article 8 of the EU Charter of Fundamental Rights”.
The technical supplementary measures considered were:
the protection of communications between Google services,
the protection of data in transit between data centers,
the protection of communications between users and websites,
“on-site security”,
encryption technologies, for example encryption of data at rest in data centers,
processing pseudonymous personal data.
With regard to encryption as one of the supplementary measures being used, the DPA took into account that a data importer covered by Section 702 FISA, as is the case in the current decision, “has a direct obligation to provide access to or surrender such data”. The DPA considered that “this obligation may expressly extend to the cryptographic keys without which the data cannot be read”. Therefore, it seems that as long as the keys are kept by the data importer and the importer is subject to the US law assessed by the CJEU in Schrems II (FISA Section 702, EO 12333, PPD 28), encryption will not be considered sufficient.
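The sketch below illustrates, in general terms, why key custody is decisive: whoever holds the key can decrypt, so encryption adds little protection against an importer that keeps the key and can be compelled to surrender it. It is a generic example using the Python cryptography library, not a description of the measures assessed in the decision.

```python
# A minimal, illustrative sketch of symmetric encryption with the
# third-party "cryptography" package (pip install cryptography).
# The protection hinges on who holds `key`: an importer that stores both
# the ciphertext and the key can be compelled to surrender both.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                    # held by the data exporter in this sketch
ciphertext = Fernet(key).encrypt(b"visitor analytics record")

# With only `ciphertext`, the importer cannot read the data;
# any party that obtains `key` can:
print(Fernet(key).decrypt(ciphertext))         # b'visitor analytics record'
```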
As for the argument that the personal data being processed through Google Analytics is “pseudonymous” data, the DPA rejected it relying on findings made by the Conference of German DPAs that the use of cookie IDs, advertising IDs, and unique user IDs does not constitute pseudonymization under the GDPR, since these identifiers “are used to make the individuals distinguishable and addressable”, and not to “disguise or delete the identifying data so that data subjects can no longer be addressed” – which the Conference considers to be one of the purposes of pseudonymization.
Overall, the DPA found that the technical measures proposed were not enough because the respondents did not comprehensively explain (therefore, the respondents had the burden of proof) to what extent these measures “actually prevent or restrict the access possibilities of US intelligence services on the basis of US law”.
With this finding, highlighted also in the operative part of the decision, the DPA seems to de facto reject the “risk based approach” to international data transfers, which has been specifically invoked during the proceedings. This is a theory according to which, for a transfer to be lawful in the absence of an adequacy decision, it is sufficient to prove the likelihood of the government accessing personal data transferred on the basis of additional safeguards is minimal or reduced in practice for a specific transfer, regardless of the broad authority that the government has under the relevant legal framework to access that data and regardless of the lack of effective redress.
The Austrian DPA is technically taking the view that it is not sufficient to reduce the risk of access to data in practice, as long as the possibility to access personal data on the basis of US law is actually not prevented, or in other words, not eliminated. This conclusion is apparent also from the language used in the operative part of the decision, where the DPA summarizes its findings as such: “the measures taken in addition to the SCCs … are not effective because they do not eliminate the possibility of surveillance and access by US intelligence agencies”.
If other DPAs confirm this approach for transfers from the EU to the US in their decisions, the list of potentially effective supplemental measures for transfers of personal data to the US will remain minimal – prima facie, it seems that nothing short of anonymization (per the GDPR standard) or any other technical measure that will effectively and physically eliminate the possibility of accessing personal data by US national security authorities will suffice under this approach.
A key reminder here is that the list of supplementary measures detailed in the EDPB Recommendation concerns all international data transfers based on additional safeguards, to all third countries in general, in the absence of an adequacy decision. In the decision summarized here, the supplementary measures found to be ineffective concern their ability to cover “gaps” in the level of data protection of the US legal framework, as resulting from findings of the CJEU with regard to three specific legal acts (FISA Section 702, EO 12333 and PPD 28). Therefore, the supplementary measures discussed and their assessment may be different for transfers to another jurisdiction.
2.3 Are data importers liable for the lawfulness of the data transfer?
One of the most consequential findings of the Austrian DPA that may have an impact on international data transfers cases moving forward is that “the requirements of Chapter V of the GDPR must be complied with by the data exporter, but not by the data importer” – therefore, under this interpretation, the organizations that are on the receiving end of a data transfer, at least when they are a processor for the data exporter like in the present case, cannot be found in breach of the international data transfers obligations under the GDPR. The main argument used was that “the second respondent (as data importer) does not disclose the personal data of the complainant, but (only) receives them”. As a result, Google was found not to breach Article 44 GDPR in this case.
However, the DPA did consider that it is necessary to look further, and as part of separate proceedings, into how the second respondent complied with its obligations as a data processor, and in particular the obligation to process personal data on documented instructions from the controller, including with regard to transfers of personal data to a third country or an international organization, as detailed in Article 28(3)(a) and Article 29 GDPR.
3. Sanctions and consequences: Between preemptive deletion of cookies, reprimands and blocking transfers
Another commonality of the two decisions summarized is that neither of them resulted in a fine. The EDPS issued a reprimand against the European Parliament for several breaches of the EUDPR, including those related to international data transfers “due to its reliance on the Standard Contractual Clauses in the absence of a demonstration that data subjects’ personal data transferred to the US were provided an essentially equivalent level of protection”. It is significant to mention that the EP asked the website service provider to disable both Google Analytics and Stripe cookies within days of being contacted by the complainants on October 27, 2020. The cookies at issue were active between September 30, when the website became available, and November 4, 2020.
In turn, the Austrian DPA found that “the Google Analytics tool (at least in the version of August 14, 2020) can thus not be used in compliance with the requirements of Chapter V GDPR”. However, as discussed above, the DPA found that only the website operator – as the data exporter – was in breach of Article 44 GDPR. The DPA decided not to issue a fine in this case.
However, the DPA is moving to impose a ban on the data transfers, or a similar order, against the website, though with some procedural complications. In the middle of the proceedings, the Austrian company that was in charge of managing the website transferred the responsibility of operating it to a company based in Germany, so the website is no longer under its control. But since the DPA noted that Google Analytics continued to be implemented on the website at the time of the decision, it resolved to refer the case to the competent German supervisory authority with regard to the possible use of remedial powers against the new operator.
Therefore, stopping the transfer of personal data to the US without appropriate safeguards, rather than sanctioning the data exporters, seems to be the focus in these cases. The parties have the possibility to challenge both decisions before their respective competent courts and seek judicial review within a limited period of time, but there are no indications yet whether this will happen.
4. The big picture: 101 complaints and collaboration among DPAs
The decision published by the Austrian DPA is the first one among the 101 complaints that noyb submitted directly to 14 DPAs across Europe (EU and the European Economic Area) at the same time in August 2020, from Malta, to Poland, to Liechtenstein, with identical legal arguments centered on international data transfers to the US through the use of Google Analytics or Facebook Connect, and all against websites of local or national relevance – so most likely these complaints will be considered outside the One-Stop-Shop mechanism.
The bulk of the 101 complaints were submitted to the Austrian DPA (about 50), either immediately under its competence, as in the analyzed case, or as part of the One-Stop-Shop mechanism where the Austrian DPA acts as the concerned DPA from the jurisdiction where the complainant resides, which likely required it to forward the cases to the many lead DPAs in the jurisdictions where the targeted websites have their establishment. This way, even more DPAs will have to make a decision in these cases – from Cyprus, to Greece, to Sweden, Romania and many more. About a month after the identical 101 complaints were submitted, the EDPB decided to create a taskforce to “analyse the matter and ensure a close cooperation among the members of the Board”.
In contrast, the complaint against the European Parliament was not part of this set; it was submitted separately, at a later date, to the EDPS, relying on similar arguments on the issue of international data transfers to the US through Google Analytics and Stripe cookies. Even though it was not part of the 101 complaints, it is clear that the authorities cooperated or communicated, with the EDPS making a direct reference to the Austrian proceedings, as shown above.
In other signs of cooperation, both the Dutch DPA and the Danish DPA have published notices immediately after the publication of the Austrian decision to alert organizations that they may soon issue new guidance in relation to the use of Google Analytics, specifically referring to the Austrian case. Of note, the Danish DPA highlighted that “as a result of the decision of the Austrian DPA” it is now “in doubt whether – and how – such tools can be used in accordance with data protection law, including the rules on transfers of personal data to third countries”. It also called for a common approach of DPAs on this issue: “it is essential that European regulators have a common interpretation of the rules”, since data protection law “intends to promote the internal market”.
In the end, the DPAs are applying findings from a judgment of the CJEU, which has ultimate authority in the interpretation of EU law that must be applied across all EU Member States. All this indicates that a series of similar decisions will likely be published in succession over the short to medium term, with little chance of significant variation. This is why the two cases summarized here can be seen as the first pieces to fall in a domino.
This domino, though, will not only be about the 101 cases and the specific cookies they target – it eventually concerns all US-based service providers and businesses that receive personal data from the EU potentially covered by the broad reach of FISA Section 702 and EO 12333; all EU-based organizations, from website operators, to businesses, schools, and public agencies, that use the services provided by the former or engage them as business partners and disclose personal data to them; and it may well affect all EU-based businesses that have offices and subsidiaries in the US and that make personal data available to these entities.
5 Tips for Protecting Your Privacy Online
Today, almost everything we do online involves companies collecting personal information about us. Personal data is collected and regularly used for a number of reasons – like when you use social media accounts, when you shop online or redeem digital coupons at the store, or when you search the internet.
Sometimes, information is collected about you by one company, and then shared or sold to another. While data collection can offer benefits to both you and businesses – like connecting with friends, getting directions, or sales promotions – it can also be used in ways that are intrusive – unless you take control.
There are many ways you can protect your personal data and information and control how it is shared and used. On this Data Privacy Day – recognized annually on January 28 to mark the anniversary of Convention 108, the first binding international treaty to protect personal data – the Future of Privacy Forum and other organizations are raising awareness and promoting best practices for data privacy.
For the second year in a row, FPF is partnering with Snap to provide a privacy-themed Snap filter to spread awareness of the importance of data privacy to your networks. Scan the Snapcode below to check it out:
Share the pictures you took using our interactive lens on social media using the hashtag #FPFDataPrivacyDay2022.
You should know that there are steps you can take to better protect your privacy online. Below, we’ve listed five tips you can follow to better protect your privacy when using your mobile device.
1. Check Your Privacy Settings
Many social media sites include options that let you tailor your privacy settings to limit the ways data is collected or used. Snap provides privacy settings that control who can contact you, among many others. Start with the Snap Privacy Center to review your settings. You can find those choices here.
Snap provides options for you to view any data it has collected about you, including the date your account was created and the devices that have access to your account. Downloading your data allows you to view the types of information that have been collected and modify your settings accordingly.
Instagram allows you to manage a variety of privacy settings, including who has access to your posts, who can comment on or like them, and what happens to posts after you delete them. You can view and change your settings here.
TikTok allows you to choose between a public and a private account, decide which accounts can view your posted videos, and change your personalized ad settings. You can check your settings here.
Twitter allows you to manage whether it shares your information with third-party businesses, whether the site can track your internet browsing outside of Twitter, and whether ads are tailored to you. Check your settings here.
Facebook provides a range of privacy settings that can be found here.
What other apps do you use often? Check to see which settings they provide!
2. Limit Sharing of Location Data
Most social media sites will ask for access to your location data. Do they need it for some reason that is obvious, like helping you with directions or showing you nearby friends? Feel free to say no. And be aware that location data is often used to tailor ads and recommendations based on locations you have recently visited. Allowing access to location services may also permit the sharing of location information with third parties.
Snap has a variety of ways to control who is able to view your location. On their settings page, you can select whether no one, just select users, or all friends will be able to view your location on Snap Map. You can also choose to deny individual users from viewing your location.
To check the location permissions granted to social media apps on an iPhone or Android device, follow the steps below.
Navigate to “Settings”, then “Location,” and then “App Permissions”
Select the social media app you’d like to prevent from accessing your location
Make sure either "Don't Allow" or "Allow only while using the app" is selected.
3. Keep Your Devices & Apps Up to Date
Keeping software up to date is one of the most effective ways to protect your device against the latest known vulnerabilities. Running current security software, web browsers, and operating systems is among the best defenses against online threats. By enabling automatic updates on your devices, you can be sure that your apps and operating system stay current.
Users can check the status of their operating systems in the settings app. For iPhone users, navigate to “Software Update,” and for Android devices, look for the “Security” page in settings.
4. Use a Password Manager
Using a strong, unique password for each web-based account helps protect your personal data and information from unauthorized use. It can be difficult to remember complex passwords for every account, and a password manager can help. Password managers save passwords as you create and log in to your accounts, often alerting you to duplicates and suggesting stronger replacements. For example, when signing up for new accounts and services on an Apple product, you can allow your iPhone, Mac, or iPad to generate strong passwords and store them securely in iCloud Keychain for later access. Some well-regarded third-party password managers can be found here.
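If you are curious what a machine-generated "strong" password looks like, the short sketch below uses Python's standard-library secrets module, which draws from a cryptographically secure random source. The character set and length are illustrative choices for this example, not a recommendation tied to any particular password manager.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from a cryptographically secure source."""
    alphabet = string.ascii_letters + string.digits + "-_!@#$%"
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    # Each account gets its own password; a password manager remembers them for you.
    for account in ("email", "banking", "social"):
        print(account, generate_password())
```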
5. Enable Two-Factor Authentication
Two-factor authentication adds an additional layer of protection to your accounts. The first factor is the familiar username-and-password combination. The second factor is a code delivered to a device you control, typically by text message, email, or an authenticator app. This added step makes it harder for malicious actors to gain access to your accounts. Two-factor authentication only adds a few seconds to your day, but it can save you from the headache and harm that come with a compromised account. For stronger protection, use an authenticator app rather than text messages as your second factor.
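For readers curious how authenticator apps produce their six-digit codes, here is a minimal sketch of the time-based one-time password (TOTP) algorithm standardized in RFC 6238, written in Python with only the standard library. The base32 secret shown is a made-up example, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238) from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval               # 30-second time step
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()    # HOTP uses HMAC-SHA1
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Example secret only; real secrets come from the QR code a site shows at setup.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code depends on the current 30-second window and a secret stored on your device, an attacker who steals only your password still cannot log in.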
As many of us continue to work and learn remotely, it’s important to stay aware of the information you share on and offline. Remember to adjust your settings regularly, staying on top of any privacy changes and updates made on the web applications you use daily. Take charge of protecting your personal data and encourage others to look at the information they may be sharing. By adjusting your settings and making changes to your web accounts and devices, you can better maintain the security and privacy of your personal data.
If you’re interested in learning more about one of the topics discussed here or about other issues that are driving the future of privacy, sign up for our monthly briefing, check out one of our upcoming events, or follow us on Twitter and LinkedIn. FPF brings together some of the top minds in privacy to discuss how we can all benefit from the insights gained from data, while respecting the individual right to privacy.
Five Burning Questions (and Zero Predictions) for the U.S. State Privacy Landscape in 2022
Entering 2022, the United States remains one of the few major economic powers without a comprehensive, national framework governing the collection and use of consumer data throughout the economy. An ongoing impasse in federal efforts to advance privacy legislation has created a vacuum that state lawmakers, seeking to secure privacy rights and protections for their constituents, are actively working to fill.
Last year we saw scores of comprehensive privacy bills introduced in dozens of states, though when the dust settled, only Virginia and Colorado had joined California in successfully enacting new privacy regimes. Now, at the outset of a new legislative calendar, many state legislatures are positioned to make progress on privacy legislation. While stakeholders are eager to learn which (if any) states will push new laws over the finish line, it remains too early in the lawmaking cycle to make such predictions with confidence. So instead, this post explores five key questions about the state privacy landscape that will determine whether 2022 proves to be a pivotal year for the protection of consumer data in the United States.
1. Will A Single (State) Framework Emerge Supreme?
A common refrain heard in the U.S. privacy debate is that each state creating its own data privacy rules threatens to create a confusing and costly “patchwork” of divergent laws. While some degree of tension between different state privacy laws is already baked into the landscape, regulated entities may be hoping that a particular regulatory approach emerges as an interoperable norm across the states. Some of the likely contenders for this title are laid out below.
California Model
California was the first mover on comprehensive privacy legislation, enacting the California Consumer Privacy Act (CCPA) in June 2018. At the time, many observers predicted that the “California effect” would establish the CCPA as a de facto national standard and drive the adoption of similar laws throughout the nation (reminiscent of breach reporting statutes in the 2000s). True to form, 2019 and 2020 saw dozens of CCPA-style copycat bills introduced; however, no such bill has yet proven successful. One possible reason is that California’s approach to privacy has been something of a ‘moving target’ – having undergone multiple amendments, an extended Attorney General rulemaking process, the conversion of the CCPA into the California Privacy Rights Act (CPRA) by ballot initiative, and the recent launch of a new CPRA rulemaking process.
Virginia/Colorado Model
In 2021, a new challenger appeared with the enactment of the Virginia Consumer Data Protection Act (VCDPA) and the Colorado Privacy Act (CPA). While containing multiple important distinctions (that will be explored in a subsequent post), these laws generally adhere to the same basic framework for establishing consumer privacy rights and dividing business obligations between data “controllers” and “processors.” The Virginia/Colorado model also exceeds California in certain key areas, including by requiring affirmative consent for the processing of “sensitive” personal data. As a result, this framework could represent a more stable approach to protecting privacy than California's, one that may be palatable to consumer and industry stakeholders alike.
Other Models
While California and the Virginia/Colorado models are the clear favorites, they are not the full field of contenders that could emerge as the dominant U.S. privacy framework. Last July, the Uniform Law Commission (ULC) finalized its model privacy law, the “Uniform Personal Data Protection Act,” which has already been introduced in the District of Columbia (CB 24-451), Nebraska (LB 1188), and Oklahoma (HB 3447). Notably, the ULC model significantly conflicts with established privacy frameworks and has received reactions ranging from skepticism to hostility from both industry and consumer advocacy groups, creating questions about its political viability.
There is also pending legislation in several states that, if enacted, would constitute distinct regulatory approaches from the adopted laws. For example, there are bills to watch in Massachusetts (S 46) (establishing fiduciary-style obligations on businesses); New Jersey (A 505) (including a ‘legitimate interest’ basis for data processing); and Oklahoma (HB 2969) (containing expansive use limitation requirements).
In surveying the state privacy bills introduced this year, a clear divide between the California and Colorado/Virginia frameworks is evident. State bills in Alaska (HB 222) and Indiana (HB 1261) include California-style rights for consumers to opt-out of the sale and sharing of personal information and to limit the use and disclosure of sensitive personal information. Elsewhere in Hawaii (SB 2797) and Pennsylvania (HB 2257), legislative proposals more closely follow the Virginia/Colorado approach to requiring affirmative consent for processing “sensitive data” in addition to creating opt-out rights for data sales, targeted advertising, and profiling.
2. Where Will Regulatory Processes Lead?
While much attention will be paid to the state legislative horse race, two states with laws on the books will undertake important privacy rulemaking processes this year. In California, the newly constituted California Privacy Protection Agency (CPPA) is directed to conduct a wide-ranging rulemaking that will clarify key definitions and compliance issues left open under the CPRA. Rulemaking subjects include the CPRA’s new right of correction, valid uses of data for ‘business purposes,’ and the application of the law to automated decision-making processes. In Colorado, the Attorney General has similarly been delegated broad rulemaking authority and is specifically tasked with the adoption of “rules that detail the technical specifications for one or more universal opt-out mechanisms” (discussed further below).
California and Colorado’s rulemaking processes will likely have significant impacts on the ultimate implementation and exercise of consumers’ new privacy rights in these states. Furthermore, while the CPRA and CPA statutes specifically direct the development of rules governing certain issues, their grants of rulemaking authority are open-ended, meaning that final regulations may potentially broaden the consumer rights and business compliance obligations established under these laws. However, such an expansive regulatory approach would likely be strongly contested. For example, the CPPA’s request for comment on preliminary rulemaking activity surfaced significant fault lines in stakeholder expectations for what CPRA rulemaking can and should entail for significant elements of the law.
Not all new state privacy laws will necessarily provide for open-ended rulemaking; Virginia's privacy law, for example, lacks a rulemaking process entirely. Privacy bills under consideration in 2022 have largely followed an ‘all-or-nothing’ approach to rulemaking, with legislation such as Maryland's (SB 11) and Washington's (HB 1850) seeking to give the state Attorney General or other regulators broad rulemaking authority, and bills like Ohio's (HB 376) providing for no rulemaking at all. Going forward, the inclusion of rulemaking authority in new privacy laws could create additional divergences between state approaches. However, rulemaking may also help state laws remain flexible in light of changing technology and allow lawmakers to delegate some of the more nuanced technical issues to experts with the benefit of public participation.
3. How will State Activity Impact the Federal Debate?
Despite the introduction of over a dozen federal bills and numerous hearings since 2018, bipartisan federal collaboration on comprehensive privacy legislation has repeatedly stalled out. Key lawmakers remain divided over critical issues such as private rights of action, preemption, and how to regulate against discriminatory uses of data.
Advancements in privacy at the state level will likely breathe new life into the dormant federal debate, though their impact remains uncertain. One possibility is that the adoption of additional state privacy laws may ultimately create so much regulatory complexity for industry that a breakthrough on federal privacy legislation becomes inevitable.
Alternatively, the enactment of even a single state law that contains a broad private right of action may push concerned industry stakeholders toward compromise on a federal privacy bill. Most industry participants view private lawsuits as particularly ill-suited for the privacy context, and no state has yet enacted comprehensive privacy legislation providing for expansive private lawsuits. The legislation under consideration this year takes a range of approaches to private lawsuits: in addition to bills that would establish expansive causes of action, such as New York (S 6701), or explicitly disclaim such suits, like Florida (SB 1864), some bills would restrict lawsuits to particular violations, like Florida (HB 9), or permit lawsuits but limit statutory damages, such as Washington State (SB 5813).
Finally, the successful enactment of state privacy laws containing novel approaches to protecting privacy could inform new legislative proposals at the federal level. Given that the only states to enact comprehensive privacy laws have had (at the time) unified Democratic governments, the adoption of a privacy law by a Republican-led state could impact the contours of the federal conversation. Serious efforts to enact privacy legislation have been undertaken in Republican controlled state legislatures in Florida, Ohio, and Oklahoma, with more likely on the way.
4. Will ‘Universal’ Privacy Controls be the Next Big Thing?
Many stakeholders have expressed concern that leading privacy frameworks rely too heavily on individual controls and consent options that are overwhelming and unscalable for ordinary consumers in practice. One response to this criticism has been the development and legal recognition of ‘user-selected universal opt-out mechanisms,’ often exercised through browser settings or plug-ins, that signal a consumer’s request to exercise their privacy rights to the websites they visit. Under present law, such privacy controls are omitted from the VCDPA; recognized, but not clearly mandated under the CPRA; and will be required in Colorado come 2024.
As a newer approach to expressing privacy preferences, this class of ‘universal’ controls has raised questions among stakeholders about the legal and practical effects such signals should carry. For example, it remains unclear how businesses should respond if they receive multiple, conflicting signals from different browsers or devices used by the same person. Furthermore, the potential development of separate processes governing the adoption of new signal mechanisms, and likely state-by-state differences in the underlying privacy rights these controls will exercise, could further complicate their use.
Nevertheless, ‘universal’ privacy controls represent a significant opportunity to advance consumer privacy interests and appear poised to become an increasingly prominent aspect of the privacy debate in the years to come. At present, the majority of active state bills, including those in Florida (SB 1864) and Kentucky (SB 15), would give businesses flexibility in determining context-appropriate methods for the exercise of consumers' privacy rights. However, bills in Maryland (SB 11) and Alaska (HB 159) would join Colorado in providing for the mandatory recognition of such signals.
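To make the mechanics of these signals concrete: one widely discussed example is the Global Privacy Control (GPC), under which a browser or extension sends a simple header with each web request. The sketch below is a minimal, hypothetical Python illustration of how a site might detect that header and record an opt-out preference; it is not a compliance recipe, and the function and field names are our own.

```python
from typing import Mapping, MutableMapping

def honors_gpc(headers: Mapping[str, str]) -> bool:
    """Return True if the request carries a Global Privacy Control opt-out signal.

    Participating browsers and extensions send the header "Sec-GPC: 1" with
    each request to convey the user's opt-out preference.
    """
    return headers.get("Sec-GPC", "").strip() == "1"

def handle_request(headers: Mapping[str, str], visitor_id: str,
                   preferences: MutableMapping[str, dict]) -> None:
    # Treat the signal as a request to opt out of the sale or sharing of this
    # visitor's personal data; the precise legal effect varies by state law.
    if honors_gpc(headers):
        preferences[visitor_id] = {"sale_or_sharing": "opted_out"}

if __name__ == "__main__":
    prefs: dict = {}
    handle_request({"Sec-GPC": "1"}, visitor_id="visitor-123", preferences=prefs)
    print(prefs)  # {'visitor-123': {'sale_or_sharing': 'opted_out'}}
```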
5. Will Sectoral Privacy Laws Lead the Way?
This post has focused on ‘comprehensive’ privacy legislation, broad-based legal frameworks that would establish baseline, industry- and technology-neutral rules for the protection of personal data throughout a state’s economy. However, state lawmakers are also on track to propose hundreds of more narrowly focused privacy bills that would regulate particular industries such as data brokers (Delaware HB 262) or ISPs (New York S 3885); categories of information such as children’s data (Washington State HB 1697) or biometrics (Kentucky HB 32); or specific business obligations such as reasonable security practices (West Virginia HB 2925) or transparency requirements (New Jersey A 1971). While some of these proposals are particularly narrow or limited in scope (for example, establishing a commission to study a particular issue), others could serve as both templates and catalysts for sweeping change in Americans’ privacy expectations and outcomes.
Conclusion
This commentary has noted several states where privacy legislation is already under serious consideration for the 2022 legislative calendar. However, the past informs us that fast-shifting local political dynamics can kick up surprises for state privacy efforts. Last year’s adoption of new privacy laws in Colorado and Virginia took many observers by surprise, and successful legislation may emerge from unexpected jurisdictions again this year. This post has posed many questions but can offer only one clear forecast: a turbulent and exciting year for consumer privacy legislation is just beginning. Be sure to follow the Future of Privacy Forum for updates on the U.S. privacy landscape throughout the year.
Addressing the Intersection of Civil Rights and Privacy: Federal Legislative Efforts
Last month, the National Telecommunications and Information Administration (NTIA) hosted virtual listening sessions on the intersection of data privacy, equity, and civil rights. Around the same time, the FTC announced that it will begin rulemaking on discriminatory practices in automated decision-making, and a wave of state legislation containing civil rights provisions has been introduced.
Decades of research demonstrate how data processing can reinforce existing structural inequalities along lines of race, gender, and disability, and federal and state governments have made numerous attempts to regulate the disparate impacts of data practices on protected classes. Though the intersection of data privacy and civil rights has been discussed in policy circles for years, the civil rights provisions in these bills have been surprisingly under-analyzed.
In the coming weeks and months, FPF will be publishing a blog series to provide an informative overview of government efforts to regulate discriminatory data practices through proposed legislation and executive agency enforcement. This blog is the first in the series and will cover federal legislative efforts.
In sum:
In recent years, both Democrats and Republicans have introduced comprehensive data privacy bills that would prohibit data processing that violates anti-discrimination laws. The parties remain divided, however, over auditing and reporting burdens and over enforcement.
There is also division over the scope of civil rights protections. While some proposals would apply the existing federal anti-discrimination framework to data processing activities, others would effectively expand civil rights laws, for example by broadening the definition of “protected classes” and extending public accommodation law (which has traditionally applied only to physical spaces) to online sellers of goods and services.
Some representatives and advocates remain concerned about the effects and enforcement of adtech and targeted advertising on marginalized and vulnerable populations.
Leading Federal Comprehensive Data Privacy Bills
Members of Congress have introduced a number of comprehensive data privacy bills in recent years, some of which contain civil rights provisions. The leading proposals from Democratic and Republican leaders on the Senate Commerce Committee are the Consumer Online Privacy Rights Act (COPRA) and the SAFE DATA Act (the Setting an American Framework to Ensure Data Access, Transparency, and Accountability Act).
Table 1 (below) provides a helpful comparison of the key civil rights provisions in each bill. In general, COPRA contains more comprehensive civil rights provisions than the SAFE DATA Act, which mainly codifies unlawful data processing activities under federal anti-discrimination laws and permits the FTC to inform other agencies about potential violations.
Under COPRA, it would be unlawful to conduct discriminatory data processing on the basis of a protected class in areas covered by federal anti-discrimination laws, such as housing, employment, and education. Protected classes would include those already protected under the law (race, sex, disability, etc.), as well as new ones such as source of income, familial status, and biometric information. COPRA would also require entities to conduct impact assessments on the accuracy, bias, and potential discrimination of their algorithms. Violations of the law would be enforced by the FTC, state AGs, or through a private right of action, under which a plaintiff could recover up to $1,000 per violation per day. Small businesses, however, would be exempt. In comparison (see Table 1), the SAFE DATA Act contains few civil rights provisions.
Table 1. Key civil rights provisions in COPRA (Section 108) and the SAFE DATA Act (Section 201)

Discrimination Provisions
COPRA, Section 108: A covered entity shall not process or transfer covered data on the basis of [protected class] for the purpose of: (A) advertising, marketing, soliciting, offering, selling, leasing, licensing, renting, or otherwise commercially contracting for a housing, employment, credit, or education opportunity, in a manner that unlawfully discriminates against or otherwise makes the opportunity unavailable to the individual or class of individuals; OR (B) in a manner that unlawfully segregates, discriminates against, or otherwise makes unavailable to the individual or class of individuals the goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation.
SAFE DATA, Section 201: Whenever the Commission obtains information that a covered entity may have processed or transferred covered data in violation of Federal anti-discrimination laws, the Commission shall transmit such information…to the appropriate Executive agency or State agency with authority to initiate proceedings relating to such violation.

Algorithmic Decision-making
COPRA, Section 108: [A] covered entity engaged in algorithmic decision-making…to make or facilitate advertising for housing, education, employment or credit opportunities…or restrictions on the use of, any place of public accommodation, must annually conduct an impact assessment of such algorithmic decision-making that—(A) describes and evaluates the development of the covered entity’s algorithmic decision-making processes, including the design and training data used to develop the algorithmic decision-making process and how the algorithmic decision-making process was tested for accuracy, fairness, bias, and discrimination; and (B) assesses whether the algorithmic decision-making system produces discriminatory results on the basis of an individual’s or class of individuals’ [protected class].
SAFE DATA, Section 201: The Commission shall conduct a study…examining the use of algorithms to process covered data in a manner that may violate Federal anti-discrimination laws.

Enforcement
COPRA, Section 108: Enforcement by the FTC, state attorneys general, and individuals through a private right of action. A plaintiff bringing suit would not be required to prove injury in fact (a violation alone is the injury) and could seek damages up to $1,000 per violation (or actual damages, if greater). The bill would also invalidate any pre-dispute arbitration agreement that waives claims arising under this law.
SAFE DATA, Section 201: Enforcement by the FTC or other appropriate state or federal agency.
Federal Sectoral Legislation
In some cases, sectoral efforts have taken a more dynamic approach to addressing specific harms. For example, Senator Markey (D-MA) introduced the Algorithmic Justice and Online Platform Transparency Act, which would prohibit unlawful discrimination in automated decision-making (as opposed to general data processing, as in COPRA and SAFE DATA) and impose transparency requirements mandating review and assessment of algorithms for disparate impact on protected classes.
Importantly, the bill would explicitly extend public accommodation law to “any commercial entity that offers goods and services through the internet to the general public.” Currently, Title II of the Civil Rights Act of 1964 and Title III of the Americans with Disabilities Act prohibit discrimination in places of “public accommodation,” such as hotels, restaurants, theaters, and similar physical spaces, on the basis of characteristics including race, color, national origin, and disability. Those laws have not been amended to extend to online commerce (and the federal circuit courts are split on whether ADA Title III reaches online services). While COPRA includes “places of public accommodation” within its scope of entities that may not conduct discriminatory data processing, it does not explicitly expand federal anti-discrimination law to online retailers and marketplaces. Markey’s bill would.
In a more recent example, the “Banning Surveillance Advertising Act,” introduced by Representative Anna Eshoo (D-CA) this week, would flatly prohibit targeted advertising based on characteristics protected under current federal anti-discrimination law, such as race, color, sex (including sexual orientation and gender expression), and disability. Unlike COPRA, the SAFE DATA Act, and the Markey bill, this legislation contains no small business exemption.
Advocates’ Goals
Most proposals have not gone as far as some civil rights advocates have urged. For example, the Lawyers’ Committee for Civil Rights Under Law and Free Press introduced a comprehensive Model Bill in March 2019 that would prohibit discrimination not only in economic opportunities (housing, employment, credit, insurance, or education) and in public accommodations (including any business that offers goods or services through the internet, as in the Markey bill), but also in any manner that would interfere with a person’s right to vote. Similar to COPRA, the Model Bill would also impose auditing requirements for discriminatory processing.
Under the Lawyers’ Committee proposal, the law would be enforced by the FTC, the states, the DOJ Civil Rights Division, or through a private right of action. The civil penalty for a violation would be heftier than in other legislation, at $16,500 per violation (or up to 4% of annual revenue if punitive damages are warranted or the action is brought by the state).
Other notable provisions in the Model Bill that appear in neither COPRA nor the SAFE DATA Act include:
Expanded Definition of “Privacy Risk.” The expanded definition would include intangible harms such as psychological harm (anxiety, embarrassment, fear), stigmatization or reputational harm, and disruption from unwanted commercial solicitations.
Shifting Burden of Proof. Typically, a party bringing a civil suit must prove each assertion or claim. Similar to existing civil rights law, however, the Model Bill would use a burden-shifting framework: if the plaintiff demonstrates that a data processing activity has a disparate impact on the basis of a protected characteristic, the burden shifts to the defendant to show that the processing was necessary to achieve a substantial, legitimate, and nondiscriminatory interest. If the defendant meets that burden, the burden shifts back to the plaintiff to demonstrate that an alternative policy or practice could serve that interest with a less discriminatory effect.
Affirmative Duty to Interrupt. Entities would have a duty to prevent, or aid in preventing, civil rights violations under the law; any entity that makes a conscious effort to avoid actual knowledge of a violation and has the ability to prevent or halt that violation would also be liable.
Targeted Advertising. At least some forms of targeted advertising would be regulated by the FTC as an unfair and deceptive practice, taking into consideration factors such as predatory or manipulative practices that harm marginalized populations and methods for promoting the diversity and inclusion of small businesses owned by underrepresented populations, among others.
We anticipate that the debate over the scope and substance of civil rights protections in data privacy policy is just beginning. The NTIA intends to publish a Notice and Request for Comments in the Federal Register on this topic, and members of the public who were unable to participate in the listening sessions are encouraged to respond.
Brain-Computer Interfaces & Data Protection: Understanding the Technology and Data Flows
This post is the first in a four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs.
Click here for FPF and IBM’s full report: Privacy and the Connected Mind. Additionally, FPF-curated resources, including policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are here.
I. Introduction – What are BCIs and Where are They Used?
Today, Brain-Computer Interfaces (BCIs) are primarily used in the healthcare context for purposes including rehabilitation, diagnosis, symptom management, and accessibility. While BCI technologies are not yet widely adopted in the consumer space, interest in and availability of new direct-to-consumer neurotechnologies, from gaming to education, are growing. It is important to understand how these technologies use data to provide services to individuals and institutions, as well as how their emergence across sectors can create privacy risks. As organizations work to build BCIs while mitigating privacy risks, it is paramount for policymakers, consumers, and other stakeholders to understand the state of the technology today, the neurodata it generates, and how that data flows.
BCIs are computer-based systems that directly record, process, or analyze brain-specific neurodata and translate these data into outputs that can be used as visualizations or aggregates for interpretation and reporting purposes and/or as commands to control external interfaces, influence behaviors or modulate neural activity.
BCIs can be broadly divided into three categories: 1) those that record brain activity; 2) those that modulate brain activity; and 3) those that do both, also called bi-directional BCIs (BBCIs).
BCIs can be invasive or non-invasive and employ a number of techniques for collecting neurodata and modulating neural signals.
Neurodata is data generated by the nervous system, which consists of the electrical activities between neurons or proxies of this activity.
Personal neurodata is neurodata that is reasonably linkable to an individual.
BCIs that record brain activity are more commonly used in the healthcare, gaming, and military contexts. Modulating BCIs are typically found in the healthcare context, such as when used to treat Parkinson’s disease and other movement disorders by using deep brain stimulation. BCIs cannot at present or in the near future “read a person’s complete thoughts,” serve as an accurate lie detector, or pump information directly into the brain.
II. BCIs Can Be Invasive or Non-Invasive. Both Employ a Number of Techniques for Recording Neurodata and Modulating Neural Signals
Invasive BCIs are installed directly into, or on top of, the wearer’s brain through a surgical procedure. Today, invasive BCIs are used in the health context for a variety of purposes, such as improving patients’ motor skills. Invasive BCIs can involve several different types of implants. One, an electrode array called a Utah Array, is installed in the brain and relies on a series of small metal spikes set within a small square implant to record or modulate brain signals. Other prominent examples of invasive BCIs rely on electrocorticography (ECoG), in which electrodes are attached to the brain’s exposed surface to measure the cerebral cortex’s electrical activity. ECoG is most widely used to help medical providers locate the brain area where epileptic seizures originate.
Unlike invasive BCIs, non-invasive BCIs do not require surgery. Instead, they rely on external electrodes and other sensors to collect and modulate neural signals. One of the most prominent examples of a non-invasive BCI technology is the electroencephalogram (EEG), a method for recording the brain’s electrical activity using electrodes placed on the scalp’s surface to measure neurons’ activity. EEG-based BCIs are common in gaming, where collected brain signals are used to control in-game characters and select in-game items. Another noteworthy non-invasive method is functional near-infrared spectroscopy (fNIRS), which measures proxies of brain activity via changes in blood flow to certain regions, specifically changes in oxygenated and deoxygenated hemoglobin concentration, using near-infrared light. fNIRS is especially prominent in wellness and medical BCIs, such as those used to control prosthetic limbs.
Other non-invasive techniques go beyond simply recording neurodata by also modulating the brain. For example, transcranial direct current stimulation (tDCS) and transcranial magnetic stimulation (TMS) are both used to modulate neural activity. Non-invasive neurotechnologies should not be equated with harmless ones: just because a device is not implanted on or within the brain does not mean it poses no health, privacy, or data use risks.
Both invasive and non-invasive BCIs are generally characterized by four components; a simplified code sketch illustrating these stages follows the list below:
Signal Acquisition and Digitization: Sensors (e.g., EEG, fMRI, etc.) measure neural signals. The device amplifies the signals to levels that enable processing and sometimes filters them to remove unwanted elements, such as noise and artifacts. The signals are then digitized and transferred to a computer.
Feature Extraction: As part of signal processing, applicable signals are separated from extraneous data elements, including artifacts and other undesirable elements.
Feature Translation: Signals are transformed into usable outputs.
Device Output: Translated signals can be used as visualizations for research or care, or they can be used as directed instructions, including feedforward commands utilized to operate external BCI components (e.g. external software or hardware like a robotic arm) or feedback commands which may provide afferent (conducted inward) information to the user or may directly modulate on-going neural signals.
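To make these four stages concrete, the sketch below walks a synthetic EEG-like signal through acquisition, feature extraction, translation, and output in Python. It is a simplified, hypothetical illustration rather than the processing chain of any real device; the sampling rate, frequency band, and decision threshold are assumptions chosen for the example.

```python
import numpy as np

RATE = 256           # samples per second, a common EEG sampling rate
ALPHA = (8.0, 12.0)  # alpha band, often associated with relaxed, eyes-closed states

def acquire(seconds: float = 2.0) -> np.ndarray:
    """1. Signal acquisition & digitization: simulate a noisy EEG-like trace."""
    t = np.arange(0, seconds, 1.0 / RATE)
    alpha_wave = 10e-6 * np.sin(2 * np.pi * 10 * t)   # 10 Hz rhythm, ~10 microvolts
    noise = 5e-6 * np.random.randn(t.size)            # broadband noise
    return alpha_wave + noise

def extract_alpha_power(signal: np.ndarray) -> float:
    """2. Feature extraction: fraction of spectral power in the alpha band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / RATE)
    band = (freqs >= ALPHA[0]) & (freqs <= ALPHA[1])
    return float(spectrum[band].sum() / spectrum.sum())

def translate(alpha_ratio: float, threshold: float = 0.3) -> str:
    """3. Feature translation: map the extracted feature onto a device command."""
    return "SELECT" if alpha_ratio > threshold else "IDLE"

if __name__ == "__main__":
    # 4. Device output: in a real system the command could drive a cursor,
    #    a game character, or a prosthetic limb.
    signal = acquire()
    alpha_ratio = extract_alpha_power(signal)
    print(f"alpha ratio: {alpha_ratio:.3f} -> command: {translate(alpha_ratio)}")
```

In a real BCI the same structure holds, but feature extraction and translation are typically far more sophisticated, often involving machine learning models calibrated to a particular user's signals.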
III. Recorded Neurodata Becomes Personal Neurodata When it is Reasonably Linkable to an Individual
Neurodata is data generated by the nervous system, which consists of the electrical activities between neurons or proxies of this activity. Neurodata can be both directly recorded from the brain—in the case of BCIs—or indirectly recorded from an individual’s spinal cord, muscles, or peripheral nerves.
At times, neurodata can be personally identifiable, such as when it is reasonably linkable to an individual or combined with other identifying data associated with an individual, for example as part of a particular user profile. The recording and processing of personal neurodata can produce information related to an individual’s biology and cognitive state that is directly tied to that user’s record, use, or account. Additionally, the processing of personal neurodata can lead to inferences about an individual’s moods, intentions, and various physiological characteristics, such as arousal. Machine learning (ML) sometimes plays a role in determining whether a neurodata pattern matches a general identifier, a particular class, or a physiological state. Although identifying an individual based solely on their recorded personal neurodata is difficult, such identification has been shown to be possible with relatively minimal data (less than 30 seconds’ worth of electrical activity) within a lab setting. Some experts believe that such identification is feasible more broadly in the near term.
Personal neurodata can reveal seemingly innocuous data; record behavioral interactive activity; include health information associated with an individual; or potentially provide insight into an individual’s feelings or intentions. BCIs may eventually progress into new arenas, recording increasingly sensitive personal neurodata and leading to intimate inferences about individuals. Such applications may eventually include transcribing a wide range of a wearer’s thoughts into text, serving as an accurate lie detector, or even implanting information directly into the brain. However, these speculative uses are still in the early research phases and could be decades from fruition, or may never emerge.
IV. Conclusion
As BCIs evolve and become more commercially available across numerous sectors, it is paramount to understand the unique risks such technologies pose. Although our report and this blog series primarily focus on the privacy concerns around existing and emerging BCI capabilities, including questions about the transparency, control, security, and accuracy of data, these technologies also raise important technical considerations and ethical implications related to, for example, fairness, justice, human rights, and personal dignity. We will highlight where additional ethical and technical concerns emerge in various use cases and applications of BCIs throughout this series.
12th Annual Privacy Papers for Policymakers Awardees Explore the Nature of Privacy Rights & Harms
The winners of the 12th annual Future of Privacy Forum (FPF) Privacy Papers for Policymakers Award ask big questions about what the foundational elements of data privacy and protection should be and who will make key decisions about the application of privacy rights. Their scholarship will inform policy discussions around the world about privacy harms, corporate responsibilities, oversight of algorithms, and biometric data, among other topics.
“Policymakers and regulators in many countries are working to advance data protection laws, often seeking in particular to combat discrimination and unfairness,” said FPF CEO Jules Polonetsky. “FPF is proud to highlight independent researchers tackling big questions about how individuals and society relate to technology and data.”
This year’s papers also explore smartphone platforms as privacy regulators, the concept of data loyalty, and global privacy regulation. The award recognizes leading privacy scholarship that is relevant to policymakers in the U.S. Congress, at U.S. federal agencies, and among international data protection authorities. The winning papers will be presented at a virtual event on February 10, 2022.
The winners of the 2022 Privacy Papers for Policymakers Award are:
Privacy Harms, by Danielle Keats Citron, University of Virginia School of Law; and Daniel J. Solove, George Washington University Law School
This paper examines how courts define harm in cases involving privacy violations and how the requirement to prove harm impedes the enforcement of privacy law, given the dispersed and minor effects most privacy violations have on individuals. When those minor effects are suffered at a vast scale, however, individuals, groups, and society can suffer significant harm. The paper offers language that courts can draw on when adjudicating privacy cases and provides guidance on when privacy harm should be a required element of a lawsuit.
In this paper, Green analyzes the use of human oversight of government algorithmic decisions. From this analysis, he concludes that humans are unable to perform the desired oversight responsibilities, and that by continuing to use human oversight as a check on these algorithms, the government legitimizes the use of these faulty algorithms without addressing the associated issues. The paper offers a more stringent approach to determining whether an algorithm should be incorporated into a certain government decision, which includes critically considering the need for the algorithm and evaluating whether people are capable of effectively overseeing the algorithm.
The Surprising Virtues of Data Loyalty, by Woodrow Hartzog, Northeastern University School of Law and Khoury College of Computer Sciences, Stanford Law School Center for Internet and Society; and Neil M. Richards, Washington University School of Law, Yale Information Society Project, Stanford Center for Internet and Society
The data loyalty responsibilities for companies that process human information are now being seriously considered in both the U.S. and Europe. This paper analyzes criticisms of data loyalty that argue that such duties are unnecessary, concluding that data loyalty represents a relational approach to data that allows us to deal substantively with the problem of platforms and human information at both systemic and individual levels. The paper argues that the concept of data loyalty has some surprising virtues, including checking power and limiting systemic abuse by data collectors.
Smartphone Platforms as Privacy Regulators, by Joris van Hoboken, Vrije Universiteit Brussels, Institute for Information Law, University of Amsterdam; and Ronan Ó Fathaigh, Institute for Information Law, University of Amsterdam
In this paper, the authors look at the role of online platforms and their impact on data privacy in today’s digital economy. The paper first distinguishes the different roles that platforms can have in protecting privacy in online ecosystems, including governing access to data, design of relevant interfaces, and policing the behavior of the platform’s users. The authors then provide an argument as to what platforms’ role should be in legal frameworks. They advocate for a compromise between direct regulation of platforms and mere self-regulation, arguing that platforms should be required to make official disclosures about their privacy-related policies and practices for their respective ecosystems.
In late 2021, China enacted its first codified personal information protection law, the Personal Information Protection Law (PIPL). In this paper, Wang compares China’s PIPL with data protection laws in nine regions to help overseas Internet companies and personnel who handle personal information better understand the similarities and differences in data protection and compliance across countries and regions.
Cameras are everywhere, and with the rise of video analytics, questions are being raised about how individuals should be notified that they are being recorded. This paper studies 123 individuals’ sentiments across 2,328 video analytics deployment scenarios. The researchers advocate for the development of interfaces that simplify the task of managing notices and configuring controls, which would allow individuals to communicate their opt-in/opt-out preferences to video analytics operators.
From the record number of papers nominated this year, these six were selected by a diverse team of academics, advocates, and industry privacy professionals from FPF’s Advisory Board, based on the relevance of their research and proposed solutions to policymakers and regulators in the U.S. and abroad.
In addition to the winning papers, FPF has selected two papers for Honorable Mention: Verification Dilemmas and the Promise of Zero-Knowledge Proofs by Kenneth Bamberger, University of California, Berkeley – School of Law; Ran Canetti, Boston University, Department of Computer Science, Boston University, Faculty of Computing and Data Science, Boston University, Center for Reliable Information Systems and Cybersecurity; Shafi Goldwasser, University of California, Berkeley – Simons Institute for the Theory of Computing; Rebecca Wexler, University of California, Berkeley – School of Law; and Evan Zimmerman, University of California, Berkeley – School of Law; and A Taxonomy of Police Technology’s Racial Inequity Problems by Laura Moy, Georgetown University Law Center.
FPF also selected a paper for the Student Paper Award, A Fait Accompli? An Empirical Study into the Absence of Consent to Third Party Tracking in Android Apps by Konrad Kollnig and Reuben Binns, University of Oxford; Pierre Dewitte, KU Leuven; Max van Kleek, Ge Wang, Daniel Omeiza, Helena Webb, and Nigel Shadbolt, University of Oxford. The Student Paper Award Honorable Mention was awarded to Yeji Kim, University of California, Berkeley – School of Law, for her paper, Virtual Reality Data and Its Privacy Regulatory Challenges: A Call to Move Beyond Text-Based Informed Consent.
The winning authors will join FPF staff to present their work at a virtual event with policymakers from around the world, academics, and industry privacy professionals. The event will be held on February 10, 2022, from 1:00 – 3:00 PM EST. The event is free and open to the general public. To register for the event, visit https://bit.ly/3qmJdL2.
Overcoming Hurdles to Effective Data Sharing for Researchers
In 2021, the challenges academics face in accessing corporate data sets for research, and the difficulties companies encounter in making research data available in privacy-respecting ways, broke into the news. With its long history of work on research data sharing, FPF saw an opportunity to bring together leaders from the corporate, research, and policy communities for a conversation to pave a way forward on this critical issue. We held a series of four engaging dinner-time conversations to listen and learn from the myriad voices invested in research data sharing. Together, we explored what it will take to create a low-friction, high-efficacy, trusted, safe, ethical, and accountable environment for research data sharing.
FPF formed an expert program committee to set the agenda for the discussion series. The committee guided our selection of topics, helped identify talented experts to present their views, and introduced FPF to new and salient stakeholders in the research data sharing conversation. The four virtual dinners were held on November 4, November 16, December 2, and December 18. Below are significant points of discussion from each event.
The Landscape of Data Sharing
During the first dinner discussion, participants emphasized the importance of reviewing research for ethical soundness and methodological rigor. Many highlighted the challenges of performing consistent and fair ethical and methodological reviews given corporate and research stakeholders’ different expectations and capabilities. FPF has explored this dynamic in the past: both companies and researchers operate with a responsibility to the public that requires technical, ethical, and organizational work to fulfill. The ability of critical stakeholders, including consumers themselves, to articulate the clear and practical steps they take to build trusted public engagement in data sharing varies widely.
Participants offered that one of the key steps necessary to improve public and stakeholder trust in data sharing is to improve education for all parties on the topic. In particular, current efforts should be revised and expanded to more intuitively explain data collection, stewardship, hygiene, interoperability, and the differences in corporate and researchers’ data needs and expectations. Participants suggested improving consumers’ digital literacy so that consent to collecting or using personal data can be more meaningful and dynamic.
Research Ethics and Integrity for a Data Sharing Environment
During our second dinner, two topics emerged. First, participants pointed out how regulations and organizational rules limit the ability of institutions to superintend the ethical, technical, and administrative reviews called for in discussions of data sharing.
Second, the participants homed in on data de-identification and anonymization as critical components of the ethical and technical review of proposed data uses for research. While variations in the interpretation of research ethics regulations and norms by Institutional Review Boards (IRBs) lead to an inconsistent and shifting landscape for researchers and companies, the expert panelists pointed out that the variation between IRBs is not as significant as the variation between regulatory controls for research governed by federal restrictions (the Common Rule) and those applied to commercial research under consumer protection laws.
Several participants advocated for a comprehensive U.S. federal data privacy law to equalize institutional variations, eliminate gaps between consumer data protection and research data protections, and clarify protections for research uses of commercial data. Efforts to close such regulatory gaps would require educating all stakeholders, including legislators, researchers, data scientists, and companies’ data protection officers, about the relative differences between risks around research data and risks associated with commercial use or breach of consumer data.
While participants recommended comprehensive privacy legislation as an ideal, serious consideration was paid to the role that specific agency rule-making efforts could play in this space. One of the topics for rulemaking was the concept of data anonymization. Participants considered how to achieve agreement on the ethical imperative for data anonymization. They identified some important steps toward anonymization, such as developing a more agreeable definition of “anonymous” that could be implemented by the many different parties involved in the research data sharing process and providing essential technical support to achieve the expected standards of data anonymization.
The Challenges of Sharing Data from the Perspective of Corporations
During our third dinner, the discussion focused on assessing researchers’ fitness to access an organization’s data. We also discussed evaluating research projects in light of public interest expectations. There was widespread agreement that data sharing is vital for various reasons, such as promoting the next generation of scientific breakthroughs and holding companies publicly accountable. There was disagreement, however, over how to ensure both that data is available for research and that individuals’ privacy is continuously protected.
Some asserted that companies use privacy as an argument to protect their own interests and that privacy is not as difficult a standard to achieve as is often described. Others disagreed with this assessment, saying that they always assume the worst when it comes to the efficacy of privacy protections.
There are also technical and social barriers to democratizing access to corporate data for research. Participants pointed out that technical barriers can be low bars, like file size and type, or high barriers, such as overcoming data fragmentation, including personnel expertise when reviewing projects, building and maintaining shareable data, and managing sector-specific privacy legislation that governs what companies must do to achieve existing data privacy requirements.
Social barriers were discussed as high bars, like limiting access to researchers affiliated with the “right” institutions. Participants discussed how to sufficiently democratize know-how to expand corporate data-sharing and build and maintain the trusted network relationships critical for facilitating data sharing across various parts of the researcher-company environment. Consent reemerged as both a technical and social barrier to data sharing. In particular, participants addressed the problem of securing consumers’ meaningful consent for the use of data in unforeseen but beneficial research use cases that may arise far in the future.
Legislation, Regulation, and Standardization in Data Sharing
During the final dinner conversation, participants tackled the challenging issues of legislation, regulation, and standardization in the research data sharing environment. There was broad agreement that there should be standards for data sharing to make the process more accessible and data more usable. Most participants agreed that data should be FAIR (findable, accessible, interoperable, and reusable) and harmonized. Still, there was disagreement over which field or institution provides a good model (economics, astronomy, and the US Census were discussed as possibilities).
There was agreement that researchers should meet a certain standard to be given access, but this must be done carefully to avoid creating tiers of first and second-class researchers. The discussion highlighted the importance of having shared standards, vocabulary, terminology, and expectations about the amount of data and supporting material to be transferred.
Interoperability of terms, ontologies, and expectations was another concern flagged throughout the dinner; merely having data available to researchers does not guarantee that they can use it. There was disagreement about what kind of role the National Institutes of Standards and Technology (NIST), the Federal Trade Commission (FTC), and the National Science Foundation (NSF), or researchers’ professional institutions should play or if all of them should play a role in enforcing these standards.
A lack of access to the code used to process data represents another barrier to research: without interoperability and code sharing, it is difficult to replicate experiments and build on prior discoveries. There was agreement that unethical uses of data could complicate efforts to create positive benefits. Those challenges include zombie data, predatory publication outlets, rogue analysts, and restrictions on access to research that may have national security implications.
Some Topics Came Up Repeatedly
Persistent topics of discussion throughout the dinners that should be addressed through future legislative or regulatory efforts included: ensuring data quality; data storage requirements (i.e., whether data resides with the firm or with a third party); the incentive structure for academics to share their data with other scholars and with companies; and the emerging role of synthetic data as a method for sharing valuable representations of data without transferring customers’ actual sensitive data.
The series also tackled challenging privacy questions in general, such as: are there special considerations for sharing the data of children or teens (or other vulnerable or protected classes)? Is there a role for funders and publishers to more strongly require documentation for verifying accountability around the use of shared data? Is there a need for involvement by the Office of Research Integrity (ORI) and research misconduct investigators in the supervision of research data sharing?
Next Steps Toward Responsible Research Data Sharing
In the coming weeks and months, FPF will work with participants in the dinner series to consolidate the knowledge shared during the salon series into a “Playbook for Responsible Data Sharing for Research.” Developed for corporate data protection officers and their counterparts in research institutions, this playbook will cover:
the contracting, capacity-stabilization, and accountability-assurances that should govern research projects using shared data;
managing review of the ethics and design of research projects using shared data while respecting research independence;
the challenges that researchers must surmount to access and use shared data resources;
the need for effective communication of the findings from such research projects.
We look forward to sharing the “Playbook for Responsible Data Sharing for Research” with the FPF community and our many new friends and partners from the research community in the early months of 2022. Follow FPF on LinkedIn and Twitter, and subscribe to our emails to be notified of its release.