Today, the Future of Privacy Forum (FPF) released a collection of new publications and resources to help governments, educators, researchers, companies, and other organizations navigate essential privacy questions regarding the response to the coronavirus pandemic. Global leaders responding to the coronavirus pandemic are increasingly relying on data from individuals and communities to analyze the virus’ progression, deploy resources, and make policy decisions.
“We want to help organizations make data available for leaders, researchers, and the public without opening the door to lasting or limitless surveillance,” said Jules Polonetsky, CEO of the Future of Privacy Forum. “The information we have compiled will help decision makers think clearly about – and document – what personal information they will collect or disclose, to whom, and under what conditions.”
COVID-19: Privacy & Data Protection Resources consolidates privacy resources from sources around the world, highlighting resources that are useful to organizations grappling with questions about pandemic-related data. The site will be updated on a regular basis with new content.
A Closer Look at Location Data: Privacy and Pandemics. Public health agencies and epidemiologists are analyzing device location data to track the COVID-19 pandemic. Reporters and researchers considering the implications of using location data to respond to the pandemic will find this blog post valuable.
Student Privacy During the COVID-19 Pandemic. K-12 and higher education administrators and educators will appreciate this joint School Superintendents Association (AASA)-FPF white paper’s guidance on how the Family Educational Rights and Privacy Act (FERPA) applies to schools in the context of COVID-19. FERPA and the Health Insurance Portability and Accountability Act (HIPAA) govern the disclosure of students’ health information held by schools; both laws permit emergency disclosures to protect the health or safety of others in some circumstances.
Privacy and Pandemics: A Thoughtful Discussion. On March 26, 2020, FPF brought together a dozen ethicists, academics, government officials, and corporate privacy and “data for good” leaders for a virtual workshop with more than 100 attendees to discuss data sharing in times of crisis and effective privacy and civil liberties measures. It’s the first in a series of events about privacy and pandemics that will be used to develop best practices and policy recommendations for decision makers.
Additional U.S. and international privacy resources that cover civil liberties and ethical best practices, security, technical tools, and emerging solutions.
As the COVID-19 virus spreads, governments, researchers, and healthcare institutions are seeking to obtain and deploy consumer data to track the spread of the virus, deliver emergency supplies, target travel restrictions and quarantines, and develop vaccines and cures. But can data collected from phones, credit cards, and other sources be used in this emergency without opening the door to lasting or limitless surveillance?
Yesterday, FPF convened a Virtual Workshop with a dozen ethicists, academics, government officials, and corporate leaders, and over 100 corporate attendees, to discuss responsible data sharing in times of crisis. It’s the first in a series of events about privacy and pandemics that FPF will use to develop best practices and policy recommendations for decision makers.
Participants discussed how recent “data for good” initiatives have informed data sharing during the crisis, concerns about data sharing in a time of low trust, lessons learned from past pandemics, how to effectively protect privacy and civil liberties, and what the COVID-19 pandemic means for the future of data sharing between companies, academics, and governments.
A more detailed workshop report is forthcoming, but in the interest of urgency we share the most important advice that arose in the Workshop for companies with data that could be of value to public health:
Understand how your own data sets relate to the needs of health experts. Any data set should be just one input into a broader epidemiological model. Some sets are not large enough, accurate enough, or relevant enough to be useful. Several participants warned that sharing flawed data or treating one data set as a “silver bullet” can lead decision-makers astray. Instead, companies should be sure to understand both the best ways that their data can be used and the risks associated with sharing their specific data. It is essential to work with medical and public health partners to understand their data needs, rather than merely provide analysis based on data collected for commercial uses.
Continue to follow your guidelines for data protection during the crisis, and recognize that your standards for sharing have not changed. Participants agreed that data protection principles should not be abandoned because there is a crisis, but pointed out that the standards for prioritizing review of projects have changed because of pandemic-driven urgency. Many companies regularly face smaller-scale emergency requests for data. Some companies have established expedited processes to quickly elevate exigent data-sharing decisions to the highest levels.
Establish clear boundaries. History tells us that it is difficult to discontinue practices started in an emergency. In the absence of clear systemic rules, organizations should establish an exit strategy up front to protect against continued “emergency” practices after the crisis. Companies must be clear that data shared now should not be kept forever or used for other purposes; clear rules help maintain and build public trust in their programs.
Use data protection safeguards, such as anonymization and aggregation. These are established techniques, but there is no standard definition of what they mean, and much skepticism about the ability to guarantee anonymization. Companies should explain how they use these techniques in specific situations. While data is being shared during this emergency, organizations must continue to follow principles such as data minimization, proportionality, and the destruction of data after it is transferred or used.
Work with a partner that has controls in place. Companies with established “data for good” programs have been working with partners to ensure data sets are appropriate, anonymized, and aggregated as much as possible. Participants expressed that working through existing arrangements is preferable to developing new partnerships in the midst of a crisis. For example, many university research groups already have data sharing agreements in place that have been vetted by Institutional Review Boards. These groups could act as trusted intermediaries between companies and public agencies.
Be transparent. To maintain public trust, companies need to clearly explain what data is being shared, with whom, and for what purpose.
The Workshop’s participants agreed that it would be better if more companies, non-profits, governments, and academics had been working collaboratively on the technical infrastructure, governance structures, and legal frameworks for data sharing in an emergency before the COVID-19 pandemic hit.
Some participants recommended ways to strengthen the “data for good” ecosystem over time, including standing up new trust structures. One participant recommended strengthening the “data enablers” in the system, such as institutional or ethical review boards, which can serve as checks on ill-advised data sharing and also facilitate connecting data sources – often, companies that have data with socially beneficial uses – with data users, like researchers and policymakers.
Participants also agreed that data protection and humanitarian action are completely compatible. While the trade-offs for decisions about sharing data have changed, there still should be a thoughtful and legally justified process for considering what data to share, with whom, for what purposes, and how it should be protected.
Many more insights and details were gathered and will inform FPF’s ongoing work with stakeholders to identify best practices and policy recommendations for decision makers.
A Closer Look at Location Data: Privacy and Pandemics
In this series, Privacy and Pandemics, the Future of Privacy Forum explores the challenges posed by the COVID-19 crisis to existing ethical, privacy, and data protection frameworks, and will seek to provide information and guidance to companies and researchers interested in responsible data sharing to support public health response. Future posts will examine pandemic-tracking mobile apps, regulatory guidance across the world, and more.
Part 1: A Closer Look at Location Data
Principal author: Stacey Gray (Senior Counsel) ([email protected]). Contributors: Chelsey Colbert (Policy Counsel, Mobility and Location); Polly Sanderson (Policy Counsel, Legislative Analysis); Katelyn Ringrose (Policy Fellow); Dr. Sara Jordan (Policy Counsel, Artificial Intelligence and Ethics). Email us at [email protected].
In light of COVID-19, there is heightened global interest in harnessing location data held by major tech companies to track individuals affected by the virus, better understand the effectiveness of social distancing, or send alerts to individuals who might be affected based on their previous proximity to known cases. Governments around the world are considering whether and how to use mobile location data to help contain the virus: Israel’s government passed emergency regulations to address the crisis using cell phone location data; the European Commission requested that mobile carriers provide anonymized and aggregate mobile location data; and South Korea has created a publicly available map of location data from individuals who have tested positive.
Public health agencies and epidemiologists have long been interested in analyzing device location data to track diseases. In general, the movement of devices effectively mirrors movement of people (with some exceptions discussed below). However, its use comes with a range of ethical and privacy concerns.
In order to help policymakers address these concerns, we provide below a brief explainer guide to the basics: (1) what is location data, (2) who holds it, and (3) how is it collected? Finally, we discuss some preliminary ethical and privacy considerations for processing location data. Researchers and agencies should consider: how and in what context location data was collected; the fact and reasoning behind location data being classified as legally “sensitive” in most jurisdictions; challenges to effective “anonymization”; the representativeness of the location dataset (taking into account potential bias and the lack of inclusion of low-income and elderly subpopulations who do not own phones); and the unique importance of purpose limitation, that is, not re-using location data for other civil or law enforcement purposes after the pandemic is over.
What is precise location data?
Precise location data, or “mobility data,” involves information about how devices and people move through spaces over time. Most of this information comes from the devices we carry with us, with smartphones acting as proxies for people (according to Pew, 81% of Americans owned a smartphone in 2019).
Why is this the case? Even the most basic connectivity, or the ability to send and receive wireless content on devices, has to involve information about where those devices are located. For example, providers of wireless services know where devices are located because they provide the service through local cell towers and networks. At a more general level, an IP address (an identifier that is freely and openly shared by devices to send and receive Internet traffic) is often sufficient to know a person’s city and state.
However, most researchers analyzing COVID-19 are interested in highly “precise” information about where devices (and therefore people) are located over time. The fact that an individual is located in “Washington, DC” is not sufficient for tracking an infectious disease, but information such as “works in the same building” or “attended the same restaurant at the same time as a diagnosed person” (precise location) can be very useful. Typically, we think of location data as having privacy implications when it is precise enough to single out an individual with reasonable specificity. This is often GPS-level specificity, and would usually not include information like an IP address. What counts as precise location depends in part on context, such as population density (for example, in a rural or remote area, a lower level of specificity might suffice to identify a person who would be anonymous standing in Times Square). Recent legislative proposals have attempted to create strict cut-offs (such as a 1,640-foot radius under the U.S. House Energy and Commerce Committee discussion draft, or a 1,850-foot radius under the California Privacy Rights Act ballot initiative of 2020).
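To make such distance cut-offs concrete, here is a minimal sketch (our own illustration, not drawn from any statute's implementing rules) that uses the haversine formula to test whether two coordinates fall within a 1,640-foot radius:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_FT = 20_902_231  # mean Earth radius, in feet

def haversine_ft(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in feet."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_FT * asin(sqrt(a))

def is_precise(lat1, lon1, lat2, lon2, cutoff_ft=1_640):
    """True if the two points fall within the hypothetical statutory radius."""
    return haversine_ft(lat1, lon1, lat2, lon2) <= cutoff_ft

# Two points roughly 900 feet apart in Washington, DC
print(is_precise(38.8895, -77.0353, 38.8895, -77.0385))  # → True
```

Under a cut-off like this, two sightings of a device less than 1,640 feet apart would count as the same "precise" location, illustrating why the exact radius chosen matters so much in dense urban areas versus rural ones.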
Sometimes mobility or location data is tied to known individuals (such as a name associated with a cell phone subscription); at other times it is tied only to a unique identifier associated with a device. In the latter case, such individualized data is often referred to as “anonymized.” If a dataset has instead been modified to show movements of groups of people (and not individuals), it is often referred to as “aggregated.”
Who has access to location data?
Location data is held by a variety of commercial entities that provide different services, including as part of the core functionality of a device (mobile phone carriers and operating systems), as part of a consumer-facing feature (mobile apps), or as part of tracking in physical spaces that relies on device connectivity (Internet of Things):
Mobile phone carriers. Cell phone carriers know where phones are located because they direct calls to phones through local cell towers, which may be enhanced with GPS location data.
Operating Systems. Providers of mobile operating systems — Android (Google) and iOS (Apple) — may know where devices are located as a result of providing services, improving functionality, or enabling opt-in location features. In addition, some users may have opted in to the use of cell tower and Wi-Fi data being used to improve location services.
[Image: location settings screens. Left – iOS (Apple); middle and right – Android (Google).]
Apps and App Partners. Many people have installed apps with location-based features, such as weather alerts, ridesharing, or grocery deliveries. In many cases, this location data is shared with partners in order to provide personalized ads or to monetize the free app. Many apps use Software Development Kits (SDKs), or code developed by third parties. Frequently, location data is shared with these SDK providers to improve their service or in exchange for monetization or other services.
Location Analytics Providers (Internet of Things). Connected devices emit identifying information that allows them to be tracked, even when they are not actively connected to a network. This includes mobile phones (when Wi-Fi or Bluetooth are turned on), but also other Internet of Things (IoT) devices such as fitness trackers, smart toys, or vehicles. As a result, many airports, stadiums, and brick-and-mortar stores analyze this signal data to better understand when their busiest hours are, where the highest in-store foot-traffic is, what products customers show an interest in, or how long people are waiting in lines.
How is location data collected?
When most people think of location data, they think of GPS (Global Positioning System). In fact, GPS is only one of many ways to infer where devices are located, most of which are used in some combination by carriers, operating systems, apps, and others. Commonly used methods include: GPS; Cell Towers; Wi-Fi Networks; and Beacons (among others). Each provides a different level of precision and can be used for different purposes:
GPS. Smartphones and other devices can detect location via satellite GPS independently of any telephone or internet reception, although a phone’s GPS chip is only one sensor among many. The accuracy of GPS signals varies widely and can be affected by weather or physical interference. For example, GPS is much less accurate in urban areas, and especially poor for detecting specific locations inside large buildings. As a result, modern cell phones use GPS in combination with other forms of location signal (Wi-Fi, Bluetooth) at various times to create a more accurate location determination.
Cell Towers. The main function of cell towers is to let carriers provide cell service. As a result, mobile carriers (such as AT&T, Sprint, Verizon, T-Mobile, and many others in the United States) know approximately where devices are located because they know which cell towers the devices connect with. In addition to this core function, cell towers also emit unique “Cell Tower IDs,” which can be freely detected. There are many private and public databases mapping Cell Tower IDs to the known locations of cell towers. As a result, the proximity of nearby cell towers (and the signal strength of their IDs) can be used to infer where a device is located. Find your local cell towers here (OpenCellID).
Wi-Fi Networks. Mobile devices can infer their location by scanning for nearby Wi-Fi networks. Nearby networks or “access points” might include, for example, neighbors’ Wi-Fi, or the Wi-Fi available in cafes and shops. Large databases exist of the unique identifiers (MAC addresses and SSIDs) of wireless routers and their known locations, with companies such as Mozilla and Combain reporting databases of millions of unique Wi-Fi networks. Despite the relatively public nature of these identifiers, most (but not all) commercial databases offer an opt-out mechanism for users who prefer that their own network not be included. In 2011, Google created an approach for opting out a particular access point from being included in its database, which involves appending the phrase “_nomap” to the end of the wireless router’s SSID. Mozilla similarly honors the _nomap method, but other databases do not, or offer their own opt-outs.
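The technique can be illustrated with a minimal, hypothetical sketch: the access-point database, MAC addresses, and signal model below are invented for illustration, but the weighted-centroid approach reflects how Wi-Fi positioning against a survey database is commonly described.

```python
# Hypothetical survey database: MAC address -> (lat, lon) of known access points
KNOWN_APS = {
    "aa:bb:cc:00:00:01": (47.6205, -122.3493),
    "aa:bb:cc:00:00:02": (47.6210, -122.3480),
    "aa:bb:cc:00:00:03": (47.6198, -122.3505),
}

def estimate_position(scan):
    """Weighted centroid of known access points seen in a Wi-Fi scan.

    `scan` maps MAC addresses to RSSI in dBm (less negative = stronger).
    Stronger signals usually mean closer access points, so they get
    more weight in the centroid.
    """
    total_w, lat_acc, lon_acc = 0.0, 0.0, 0.0
    for mac, rssi in scan.items():
        if mac not in KNOWN_APS:
            continue  # access point not in the survey database
        weight = 10 ** (rssi / 10)  # convert dBm to linear power
        lat, lon = KNOWN_APS[mac]
        lat_acc += weight * lat
        lon_acc += weight * lon
        total_w += weight
    if total_w == 0:
        return None  # nothing recognizable nearby
    return (lat_acc / total_w, lon_acc / total_w)

# A scan seeing two known APs and one unknown one
scan = {"aa:bb:cc:00:00:01": -45, "aa:bb:cc:00:00:02": -70,
        "ff:ff:ff:00:00:99": -60}
print(estimate_position(scan))  # lands near the strongest known AP
```

In practice, commercial systems combine many such signals and apply far more sophisticated models, but the core idea is the same: a device reveals its approximate position simply by reporting which access points it can hear.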
Bluetooth Beacons. Many apps are designed to detect their proximity to “beacons,” small radio transmitters that broadcast one-way Bluetooth signals. Beacons are inexpensive and can be attached to personal items (such as a person’s keys or wallet). They can also be installed at known locations, for example in a retail space or in front of a special display of products in a shop. In these cases, an app that a user has given permission to access Bluetooth can infer the device’s location or send proximity-based alerts or other content.
Combining Signals for Precision. Modern smartphones combine signals detected from the sources above to create a more precise location than any one signal (such as GPS) would provide alone. For example, iOS and Android harness the signals from many different sensors on the device, such as altimeter and accelerometer sensors, to provide a consolidated “Location Services” feature that offers highly precise location information to apps (with a user’s permission) and that users can control in Settings. Signals can also be combined to create predictive location services, for example to predict a future traffic jam, or show users upcoming attractions on their predicted path.
Ethical and Privacy Considerations for Location Data
Lawmakers are beginning to navigate whether and how to make use of the many sources of commercial location data. As they do so, we recommend that they consider how and in what context the location data was collected (described above), as well as: the fact and reasoning behind location data being classified as legally “sensitive” in most jurisdictions; challenges to effective “anonymization”; the representativeness of the location dataset (taking into account potential bias and the lack of inclusion of low-income and elderly subpopulations who do not own phones); and the unique importance of purpose limitation, that is, not re-using location data for other civil or law enforcement purposes after the pandemic is over.
Precise location data is legally sensitive. In most jurisdictions, location data is treated as a special category of data subject to greater protections, such as heightened security standards and the requirement of affirmative express consent. For example, the longstanding approach of the US Federal Trade Commission (FTC) has been to require affirmative consent for location data. In 2016, the FTC settled with ad platform InMobi for failing to respect users’ choice not to agree to share location data with apps. Affirmative express consent is also a feature of most US legislative proposals from 2018-2020, such as the proposed California Privacy Rights Act of 2020 and U.S. Senator Cantwell’s proposed Consumer Online Privacy Rights Act. The U.S. Supreme Court has also held that location data carries unique sensitivities because of its ability to reveal highly sensitive data about people’s behaviors, patterns, and personal life, most recently in Carpenter v. United States (requiring law enforcement to obtain a warrant for cell site location data). In the EU, access to location data is normally regulated as a matter of confidentiality of telecommunications, by the strict provisions of the ePrivacy Directive which require individual consent (with very narrow exceptions).
Precise location data is very challenging to fully “anonymize.” Many government entities are interested in gaining access to “anonymous” or “anonymous and aggregated” location data to observe population-level trends and movements. While in some cases this is possible, it is very challenging to make any dataset of individual precise location data truly “anonymous.” Even if unique identifiers are used instead of names, most people’s behavior can be easily traced back to them — for example, from the location of their home (where the device “dwells” at night). These challenges are not insurmountable, but policymakers should be very careful not to overpromise, and should treat location datasets as private, sensitive information. This means they should be subject to administrative, technical, and legal controls that keep them protected and limit who can access them and for what purposes.
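The "home dwell" re-identification risk can be shown with a minimal sketch. The trace below is fabricated for illustration: given a pseudonymized stream of timestamped pings, the most frequent nighttime location usually points to a home address, even though no name appears anywhere in the data.

```python
from collections import Counter

# Fabricated pings for one pseudonymous device: (hour of day, lat, lon),
# with coordinates rounded to ~100 m precision.
PINGS = [
    (2, 40.713, -74.006), (3, 40.713, -74.006), (23, 40.713, -74.006),
    (1, 40.713, -74.006), (9, 40.758, -73.985), (13, 40.758, -73.985),
    (18, 40.742, -73.992),
]

def infer_home(pings, night=(22, 6)):
    """Most frequent location during nighttime hours — a likely home."""
    start, end = night
    night_locs = [(lat, lon) for hour, lat, lon in pings
                  if hour >= start or hour < end]
    if not night_locs:
        return None
    return Counter(night_locs).most_common(1)[0][0]

print(infer_home(PINGS))  # → (40.713, -74.006)
```

A handful of nighttime pings is enough to recover a likely home, which can then be matched against public address records, which is why swapping a name for a device identifier falls well short of anonymization.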
Even fully “aggregate” location data can sometimes be revealing. At times, even highly aggregated data about the patterns of large groups of people (such as high-level heat maps) can inadvertently reveal information. In 2017, Strava published an interactive “Global Heat Map” of its fitness app users’ movements that was later found to inadvertently reveal the locations of deployed military personnel at classified locations. This incident highlights some of the wider ethical issues associated with open data and default public data sharing. In FPF’s privacy assessment of the City of Seattle, we recommended that companies thoroughly analyze all risks, not only risks to privacy and re-identification but also to “group privacy,” as well as the impact on other values such as data quality, fairness, equity, and public trust.
Representativeness and bias are uniquely important for location datasets. Unfair data processing practices involving geolocation fall disproportionately on marginalized and vulnerable communities, so heightened privacy protections are especially critical for these groups. Voluntary apps, for example, are more likely to capture affluent communities. The mobile app “Street Bump,” released by the City of Boston to crowdsource data on which roads needed repair, was downloaded more by affluent citizens than by people in poorer neighborhoods. As a result, the system reported a disproportionate number of potholes in wealthier neighborhoods and could have led the city to distribute or prioritize its repair services inequitably. In contrast, mobile phone carrier data may be more representative, but may miss the elderly, very young, or lowest-income people who do not own cellphones.
Purpose limitation is uniquely important in a crisis. Purpose limitation is a core guiding light of the US-based Fair Information Practice Principles (FIPPs) and the EU’s General Data Protection Regulation (GDPR). Because location data is sensitive and challenging to truly “de-identify” (i.e. to significantly reduce or eliminate all privacy risks), there is a serious concern that once collected by a public health agency for pandemic tracking, it could be retained or used for other purposes. Governments should consider how the location data was collected in the first instance (with users’ knowledge or consent?), and if the decision is made to repurpose it for pandemic tracking, it should be clearly siloed for that purpose and not re-used or retained for other civil or law enforcement uses. Researchers or agencies should have clear policies and procedures in place that describe the operational and technical aspects of data management.
Conclusion
As COVID-19 continues to spread, we are facing global challenges to existing norms and best practices for data collection and use. In some cases, location and mobility data might provide one path to better understanding and combatting the pandemic. Governments and researchers seeking to address concerns and risks should ask: how and in what context the location data was collected; whether it is necessary and appropriate to achieving their goals (including whether the data is truly representative of the overall population and takes into account vulnerable populations such as the elderly); whether those goals can be achieved through less invasive means; and how that data will be used, safely stored, retained, or re-purposed following the conclusion of the pandemic.
Image Attribution: “My New York heat map” by matteoc is licensed under CC BY-NC-SA 2.0.
Additional Resources:
AAAS Science Article on How Aggregated Mobility Data could Help Fight COVID-19 (Mar. 23, 2020) (arguing that mobile location data is useful in battling the pandemic, but advocating against the use of individual-level data);
Nature Article on The Privacy Bounds of Human Mobility (Mar. 25, 2020) (revealing the lack of anonymity associated with even coarse or aggregated mobile location datasets);
FPF’s blog post on the FTC Settlement with InMobi (June, 2016) (outlining the InMobi settlement for misrepresenting the fact that they were collecting location data via Wi-Fi networks);
FPF’s City of Seattle Open Data Risk Assessment (Jan. 30, 2018) (providing tools and guidance to the City of Seattle and other municipalities navigating the complex policy, operational, technical, organizational, and ethical standards that support privacy-protective open data programs);
SharedStreets’ Post on Using Location Data for Guiding Micromobility Outcomes (Mar. 26, 2019);
European Data Protection Board, 1/2020 Guidelines on Processing Personal Data (Feb. 2020) (providing guidance in the context of connected vehicles and mobility related applications);
“There’s no question that schools and institutions are struggling to manage this unprecedented situation and need as much support and information as possible to do their jobs,” said Amelia Vance, FPF’s Director of Youth and Education Privacy. “The Future of Privacy Forum is tracking the situation closely in an effort to anticipate and help address the challenges that schools may encounter as they work to navigate the COVID-19 pandemic, and we expect to release additional resources in the days ahead.”
“As our nation’s public school superintendents navigate through the extraordinary set of circumstances they face in light of COVID-19, AASA remains committed to gathering, creating, and disseminating as many resources as possible to answer, to the best of our ability, the myriad questions they raise,” said Noelle Ellerson Ng, AASA’s Associate Executive Director for Advocacy & Governance. “Through our work with FPF, we are happy to provide this collection of frequently asked questions in the context of student data and privacy and FERPA. Protecting student data and privacy is just one of the many factors they need to consider, and we are pleased to have the opportunity to share this resource today.”
The white paper offers insight into how the health or safety emergency exception under the Family Educational Rights and Privacy Act (FERPA) allows schools to share students’ personally identifiable information (PII) with the community and relevant officials during the COVID-19 pandemic.
According to FPF and AASA, under the FERPA health or safety emergency exception, “if a school determines that there is an articulable and significant threat to the health or safety of a student or other individuals and that someone needs PII from education records to protect the student’s or other individuals’ health or safety, it may disclose that information to the people who need to know it without first gaining the student’s or parent’s consent.” Read more.
The white paper also addresses a number of frequently asked questions, including:
If a student has COVID-19, what information from education records can the school share with the community?
If the school suspects that a student has COVID-19, what information can the school share with its community?
If a school suspects that a student may have COVID-19, can school officials contact the student’s primary care physician?
If a student has COVID-19 and the school’s health records are covered by HIPAA rather than FERPA, what information may the school disclose to its community?
What if the school receives a voluntary request from a local, state, or federal agency for student records to assist the agency in responding to the COVID-19 outbreak?
What should a school do if it receives a request under a mandatory reporting law to share student health records with a public health agency?
Do interagency agreements with other state or local agencies allow schools to disclose education records without obtaining consent?
To read the white paper, click here. To learn more about the Future of Privacy Forum’s student privacy work, click here.
“What does it mean for people actually working with data, people monitoring data, people analyzing data, people selling data? What does it mean to actually be thinking about privacy as a human right?” Future of Privacy Forum CEO Jules Polonetsky posed these questions at the beginning of his keynote, Navigating Privacy in a Data-Driven World: Treating Privacy as a Human Right, at RSA Conference 2020 in San Francisco on February 26.
During the keynote session, Polonetsky discussed the limitations of consumer protection laws in protecting individuals’ privacy and explored how to best safeguard data to protect human rights. He stressed the importance of instituting laws that support de-identification and pseudonymization, and of enabling independent academics to access data for the benefit of research and society.
“Are corporations having too much power over individuals because of how much data they have? Are foreign countries interfering in our elections? Are automated decisions being made where I’ll be turned down for healthcare, I’ll be turned down for insurance, my probation will be extended?” asked Polonetsky. “These are not privacy issues, right? These are issues of power. These are issues of human rights at the end of the day.”
EU DPAs Issue Green and Red Lights for Processing Health Data During the COVID-19 Epidemic
As Europe grapples with an exponential increase in COVID-19 cases, some European Data Protection Authorities have issued public-interest guidance on the limits of collecting, sharing, and using personal data relating to health in these exceptional circumstances. Particular areas of concern are the breadth of measures that employers can legally take to monitor the health of their employees, as well as the collection of health data by government agencies. Overall, regulators highlight that data protection law is by no means a barrier to public health, but advise organizations against “systematic and generalized” monitoring and collection of data related to the health of their employees outside official requests and measures of public health authorities.
Background: the GDPR refers to “monitoring epidemics and their spread”
The GDPR identifies several avenues to process personal data when the vital interests of individuals are at stake or for important grounds of public interest. Recital 46 specifically refers to the lawfulness of some types of processing that serve these two goals, “including for monitoring epidemics and their spread”. There are provisions in both Article 6 GDPR (the general lawful grounds for processing personal data) and Article 9 (the prohibition on processing sensitive data and the exceptional circumstances in which it can be processed) that allow for the collection, use, and necessary sharing of personal data related to health in the context of an epidemic.
For example, “reasons of public interest in the area of public health, such as protecting against serious cross-border threats to health” are specifically mentioned as a permissible use of sensitive data, including data related to health, under Article 9(2)(i) GDPR, if provided by Union or Member State law. At the same time, Recital 52 specifically refers to derogations from the prohibition on processing sensitive data justified for “monitoring and alert purposes” and “the prevention or control of communicable diseases and other serious threats to health”.
Garante: Systematic and generalized collection of health data by employers, discouraged
Italy has been by far the country most impacted by the COVID-19 epidemic in Europe to date, with the government yesterday taking the unprecedented measure of locking down the entire country. The Italian data protection supervisory authority – the Garante – highlighted in early guidance issued last week that public health authorities are the organizations mandated to collect and manage health data related to the virus’ spread.
“Preventing the spread of Coronavirus is an objective to be pursued by entities that are tasked with discharging this mission in a professional manner. The investigation into and collection of information on the symptoms typical of Coronavirus and on the recent movements of each individual are the responsibility of healthcare professionals and the civil protection system, which are the entities tasked with ensuring compliance with the public health rules that were recently adopted”, wrote the Garante.
Therefore, the key recommendation made by the Italian DPA was for employers to “refrain from collecting, in advance and in a systematic and generalised manner, including through specific requests to individual workers or unauthorized investigations, information on the presence of any signs of influenza in the worker and his or her closest contacts, or anyhow regarding areas outside of the work environment”. The Garante recalled that employees are under an obligation to inform their employer of any danger to health and safety at the workplace and encouraged employers to set up specific channels of communication related to this type of information.
The Garante called on all controllers to “comply strictly with the instructions provided by the Ministry of Health and the competent institutions to prevent the spread of the Coronavirus without undertaking autonomous initiatives aimed at the collection of data also on the health of users and workers”.
The Italian Government yesterday evening published a Decree in the Official Journal (No 14/2020) creating a special legal framework for the collection and sharing of personal data related to health by public health authorities, and by private companies that are part of the national health system, for the duration of the state of emergency related to COVID-19.
Irish DPC: Requesting information about recent travel and symptoms of employees and visitors, potentially justified
The Irish Data Protection Commissioner clarified from the outset in her guidance that “data protection law does not stand in the way of the provision of healthcare and the management of public health issues”. But at the same time, “there are important considerations which should be taken into account when handling personal data in these contexts, particularly health and other sensitive data”. Not only does the processing need to be necessary and proportionate, but it also “needs to be informed by the guidance and/or directions of public health authorities, or other relevant authorities.”
The DPC highlights particularly relevant aspects of compliance: transparency about the measures taken; in-house confidentiality when handling information about specific employees’ possible COVID-19 infections; appropriate data security; processing the minimum amount of personal data needed to implement measures to prevent or contain the spread of the virus; and, as part of accountability obligations, keeping track of all decisions made regarding the collection of such data and the safeguards implemented.
The Guidance also has a Q&A section addressing specific scenarios raised by organizations in their communications with the DPC. For example, can an employer require all staff and visitors to the building to fill out a questionnaire requesting information on their recent travel history to countries affected by the virus, as well as medical information such as temperature? Considering that under Irish law employers also have a legal obligation to protect the health of their employees and maintain a safe place of work, on top of the data protection justifications mentioned above, “employers would be justified in asking employees and visitors to inform them if they have visited an affected area and/or are experiencing symptoms”. If such information were gathered via questionnaires, this would need a justification based on necessity and proportionality, taking into account any directions and guidance of public health authorities.
The CNIL: Collection of medical files or questionnaires from all employees, likely not justified
The French supervisory authority – the CNIL – reminded organizations that personal data related to health enjoys stronger protection under the GDPR due to its sensitivity. The brief guidance issued on March 6 focused on what employers can and cannot do in relation to data about the health status of their employees. As a rule, any exceptional processing of personal data prompted by the epidemic should not go beyond what is necessary for the management of suspected exposure to the virus, especially considering that the Code of Public Health also applies to this situation.
On the blacklist of processing activities, the CNIL specifies that “employers must refrain from collecting in a systematic and generalized manner, or through individual inquiries and requests, information relating to the search for possible symptoms presented by an employee and their relatives”. The CNIL gives examples of unlawful processing:
Mandatory measurement of temperature of each employee and visitor to be sent daily to their hierarchy;
The collection of medical files or questionnaires from all employees.
Admittedly, the questionnaires in this recommendation may refer strictly to the collection of information related to an employee’s overall health and not to their recent travel, but this would need to be further clarified.
The CNIL also offers examples of actions that employers can implement lawfully:
Training employees to individually disclose information related to possible exposure to the virus to the employer or to the competent health authorities;
Setting up dedicated channels to receive this type of information;
Promoting work from home solutions;
In the event of a report about possible exposure, “the employer can record the date and identity of the person suspected of having been exposed; the organizational measures taken (confinement, teleworking, orientation and contact with the occupational doctor, etc.).”
Health authorities can collect data related to health in the context of COVID-19’s spread, given that “the assessment and collection of information relating to the symptoms of coronavirus and information on the recent movements of certain persons is the responsibility of these public authorities”.
Importantly, the CNIL also notes that employees have an obligation under the Labor Code to preserve, by all means possible, the health and safety of others and of themselves, which means that “they must inform their employer in the event of suspected contact with the virus”.
The Future of Privacy Technology
Today, FPF is making available a report co-authored by CEO Jules Polonetsky and Policy Fellow Jeremy Greenberg that identifies future directions and requirements of privacy technology from the industry perspective.
With support from the National Science Foundation under Grant No. 1939288, a survey was designed and administered to industry privacy leaders to gather input on the future of privacy technology as it relates to their business and policy needs and objectives. In the report, the authors outline three major areas that are especially ripe for investment and development:
Private sector demand and readiness to collaborate are factors in the launch and groundswell of activities associated with the Privacy Tech Alliance, an initiative established by FPF to advance privacy-enhancing technology in the commercial, government, and not-for-profit sectors.
A Closer Look at Genetic Data Privacy and Nondiscrimination in 2020
Florida lawmakers recently introduced HB 1189/SB 1564 – a bill that would prohibit life and long-term care insurers from basing coverage and rates or denying coverage based on individuals’ genetic information. Washington State lawmakers are considering a bill, HB 2485, that would prohibit life insurance companies and others from obtaining individuals’ genetic information from direct-to-consumer (DTC) genetic testing services. These are two of many legislative efforts that would fill critical gaps in protections provided by the Genetic Information Nondiscrimination Act (GINA). GINA, a 2008 federal law, bans discrimination based on genetic information in health insurance and employment settings. However, GINA is limited; it does not protect individuals from genetic discrimination by life, long-term care, and disability insurers. Also, GINA does not apply to members of the US military, who recently received a Pentagon warning memo stating that DTC genetic testing results could impact decisions made about US military readiness.
Some states have stepped in to provide additional protections for individuals. California, for example, provides the most comprehensive protections beyond GINA. CalGINA extends nondiscrimination protections to circumstances related to housing, certain businesses, state-funded agencies, and in the provision of emergency services.
During recent years, GINA and complementary state laws have seemingly mitigated some concerns about genetic discrimination and provided employers and insurers with clarity regarding their legal obligations. Between 2013 and 2018, fewer than three cases were brought each year by the Equal Employment Opportunity Commission (EEOC), the federal agency that enforces Title II of GINA, and a shrinking number of charges citing GINA violations were filed with the EEOC (333 charges filed in 2013 versus 209 in 2019).
However, GINA has not eliminated concerns about genetic data privacy and discrimination across the board. Many stakeholders, including legislators, have engaged in efforts to promote access to genetic information for various purposes. Therefore, it is important that policymakers, companies, employees, consumers, and other stakeholders understand GINA’s limits, the privacy risks involved in using genetic information, and the privacy-centric tools and guidance that can mitigate risks in this under-explored territory.
Employer-Sponsored or Corporate Wellness Programs
In 2017, the US witnessed controversy over H.R. 1313 (the “Preserving Employee Wellness Programs Act”). Although the bill did not pass Congress, many viewed the proposal as an attempt to give employers access to employees’ genetic information through corporate wellness programs, a possibility that raised the spectre of employer discrimination against employees and their family members. The bill could have permitted employers to access information from tests for genetic diseases with no known cure. H.R. 1313 was strongly opposed by health advocates, who argued that if an employee or beneficiary’s heightened risk of developing a serious disease were revealed to an employer, the employer could use this information to predict and mitigate future healthcare costs. Such use of genetic information would violate GINA.
Although GINA has deterred employers from engaging in explicit acts of genetic information discrimination, it has not discouraged employers from engaging in the consumer genetics space by partnering with DTC genetic testing companies that offer ‘personalized wellness’ programs to employees. Personalized wellness programs’ stated goal is to help employees understand which exercise programs and diets could work best based on genetic factors. Employers, on the other hand, hope that information gleaned from genetic testing in personalized wellness programs can be used to motivate employees to engage in healthy behaviors, which might lead to overall reduced health care expenditures for the employer. There is broad agreement that genetic information – and other health data used by wellness programs – must not inform decisions regarding hiring and promotion.
Life, Long-Term Care, and Disability Insurers
The fact that GINA excludes life, long-term care, and disability insurers has been unsettling for many since the law took effect in 2008, because it leaves the door open to potential discrimination in those settings. Some states, such as Florida, have taken steps to fill these protection gaps, albeit with pushback from the insurance industry. For instance, a recent Florida bill would prohibit life insurers from making coverage decisions based on both clinical and consumer genetic testing results; it is opposed by the life insurance industry, which argues that the bill, if passed, could “disrupt Florida’s life insurance market and could harm consumers through higher prices and potentially limited product choices.”
In essence, these industries are concerned about their ability to underwrite and establish pricing or premiums based on genetic information alone or in combination with medical information taken from the Medical Information Bureau (MIB), an insurance underwriting and information exchange organization. MIB members include life and health insurance companies that seek to “assess an individual’s risk and eligibility during the underwriting of life, health, disability income, critical illness, and long-term care insurance policies.” Life, long-term care, and disability insurers argue that genetic information and MIB data improve their ability to accurately estimate risk, underwrite policies, and assess the likelihood of severe or critical illness, premature death, or disability. Individuals fear that insurers will use information that is inaccurate, incomplete, or would create discriminatory outcomes for individuals or groups.
When employers consider whether to offer personalized wellness programs that involve genetic data, they should analyze vendors’ policies to ensure that vendors will appropriately secure personal information and provide meaningful privacy controls. Employers should not consider genetic data when making hiring or promotion decisions.
When employees weigh the benefits and risks of wellness programs, they should read privacy policies and terms of use agreements to determine whether programs’ policies align with their privacy expectations.
Consumers who are concerned about whether their genetic information has made it into the MIB database typically have the right to request their MIB Consumer File under the Fair Credit Reporting Act (FCRA).
US military service members should read the recent Pentagon warning memo and, if appropriate, speak with a genetic counselor about whether or not DTC genetic testing is necessary or desirable in individual cases.
Policymakers interested in genetic discrimination and data protection issues should be aware that the National Human Genome Research Institute (NHGRI) has a Table of State Statutes Related to Genomics that provides the total number of states that have enacted legislation related to genetic information privacy and nondiscrimination. The database is publicly available and updated regularly. Additional resources suggested by the NHGRI include those offered by the Cornell Legal Information Institute, LawSeqSM Database, and National Society of Genetic Counselors. This search query on Congress.gov is a useful tool for following newly introduced state and federal legislation related to genetic information.
Researchers interested in genetic discrimination topics should consult the Final Rules for GINA and public comments about past Proposed Rules under GINA hosted by the Federal Register and searchable using the term “Genetic Information Nondiscrimination Act.”