Thermal Imaging as Pandemic Exit Strategy: Limitations, Use Cases and Privacy Implications

Authors: Hannah Schaller, Gabriela Zanfir-Fortuna, and Rachele Hendricks-Sturrup


Around the world, governments, companies, and other entities are either using or planning to rely on thermal imaging as an integral part of their strategy to reopen economies. The announced purpose of using this technology is to detect potential cases of COVID-19 and filter out individuals in public spaces who are suspected of suffering from the virus. Experts agree that the technology cannot directly identify COVID-19. Instead, it detects heightened temperature that may be due to a fever, one of the most common symptoms of the disease. Heightened temperature can also result from a fever caused by a non-COVID-19 illness, or from non-viral causes such as pregnancy, menopause, or inflammation. Not all COVID-19 patients experience heightened temperature, and individuals routinely reduce their temperatures through the use of common medication.

In this post, we (1) map out the leading technologies and products used for thermal imaging, (2) provide an overview of the use cases currently being considered for the use of thermal imaging, (3) review the key technical limitations of thermal scanning as described in scientific literature, (4) summarize the chief concerns articulated by privacy and civil rights advocates, and finally, (5) provide an in-depth overview of regulatory guidance from the US, Europe and Singapore regarding thermal imaging and temperature measurement as part of the deconfinement responses, before reaching (6) conclusions.

Our main conclusions:

  1. Overview of Technologies Being Used

FLIR Systems, Inc., one of the largest makers of thermal imaging cameras, explains that the cameras detect infrared radiation and measure the surface temperatures of people and objects. They do this by measuring the temperature differences between objects. Thermal cameras can be used to sense elevated skin temperature (EST), a proxy for core body temperature, and thus identify people who may have a fever. This allows camera operators to single out people with EST for further screening with more precise tools, such as an oral thermometer. As FLIR acknowledges, thermal cameras are not a replacement for such devices, which directly measure core body temperature.

FLIR explains that thermal cameras need to be calibrated in a lab, and be periodically recalibrated to ensure that their temperature readings match the actual temperatures of people and objects. FLIR recommends having cameras recalibrated annually. In addition to reading absolute temperatures, FLIR’s cameras have a ‘screening’ mode, where people’s temperatures are measured relative to a sampled average temperature (SAT) value. This value is an average of the temperatures of ten randomly chosen people at the testing location. The camera user then sets an “alarm temperature” at 1°C to 3°C greater than the SAT value, and the camera displays an alarm when it detects someone in this zone. As FLIR notes, a SAT value can be more accurate than absolute temperatures because it accounts for “many potential variations during screening throughout the day, including fluctuations in average person temperatures due to natural environmental changes, like ambient temperature changes.” 
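For illustration, here is a minimal sketch (in Python, with hypothetical readings and an arbitrarily chosen offset) of how a sampled average temperature and an alarm threshold could be computed. It illustrates the screening-mode concept described above, not FLIR’s actual implementation.

```python
from statistics import mean

def sampled_average_temperature(readings_c):
    """Average the skin-temperature readings (°C) of roughly ten people
    sampled at the screening location (values here are hypothetical)."""
    return mean(readings_c)

def alarm_threshold(sat_c, offset_c=1.5):
    """FLIR suggests setting the alarm 1°C to 3°C above the SAT;
    the 1.5°C offset used here is an arbitrary example."""
    return sat_c + offset_c

def should_flag(subject_temp_c, threshold_c):
    """Flag a person for secondary screening with a clinical thermometer
    when their reading meets or exceeds the alarm threshold."""
    return subject_temp_c >= threshold_c

# Example: ten sampled readings and one subject reading (all hypothetical).
sample = [36.1, 36.4, 36.2, 36.7, 36.3, 36.5, 36.2, 36.6, 36.4, 36.3]
sat = sampled_average_temperature(sample)
threshold = alarm_threshold(sat, offset_c=1.5)
print(f"SAT={sat:.1f}°C, alarm at {threshold:.1f}°C, "
      f"flag a 38.1°C reading: {should_flag(38.1, threshold)}")
```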

The accuracy of a thermal camera’s reading is affected by several factors, including the camera’s distance from the target. FLIR suggests that the camera should be as close to the target as possible, and telephoto lenses might be appropriate for longer-range readings. The camera’s functions and settings can affect its accuracy as well and need to be appropriately configured.

Thermal imaging can be paired with various other technologies. Draganfly, Inc., a Canadian drone company, has mounted thermal sensors on what it calls ‘pandemic drones’ for broad-scale aerial surveillance. The drones are also equipped with computer vision that can sense heart and respiratory rate, detect when someone coughs or sneezes, and measure how far apart people are from one another to enforce social distancing. Reportedly, it can do all of this through a single camera from a distance of 160 feet. In a video interview, Draganfly’s CEO stated that the sensors can even distinguish between different kinds of coughing.

Thermal imaging has also been paired with facial recognition by some companies based in China, including SenseTime and Megvii. Chinese AI startup Rokid has mounted a camera on a pair of glasses that uses facial recognition and thermal imaging to identify people, measure their temperature, and record this information. In Thailand, thermal imaging has been integrated into the existing biometric-based border control system, which identifies travelers using fingerprint scans and facial recognition.

While many US locations still perform temperature screenings with handheld thermometers, interest in thermal imaging cameras is growing rapidly. Several thermal imaging companies claim to have sold thousands of units to US customers since the COVID-19 outbreak began. Thermal cameras are appealing as an exit strategy solution due to some promised advantages over handheld thermometers. Vendors claim the cameras can detect the temperatures of many people at once, whereas handheld thermometers can only test one person at a time. They also claim the cameras can measure temperatures from a distance as people move. Theoretically, these abilities would lessen or eliminate the need for people to wait in line to have their temperatures taken, which in turn also reduces the risk of COVID-19 transmission. All of these promises should be weighed against the limitations of the technology and the implications for privacy and other civil rights.

  2. Current Use Cases

Airports. Airports across the world are using thermal cameras to screen travelers. Some countries, including China, Japan, South Korea, Singapore, Canada, and India, began using them in 2002-2003 (in response to SARS) or 2009 (in response to swine flu) and continue to use them in response to COVID-19. Some airports in these countries have installed additional cameras in recent months. Other countries, like Italy, have recently begun using thermal imaging at airports for the first time. Rome’s Fiumicino Airport is testing helmets equipped with thermal cameras, worn by its staff, to detect travelers’ temperatures. Other countries have resisted this technology. In the UK, Public Health England decided that British airports will not use thermal cameras, although the CEO of Heathrow Airport was in favor of doing so. US airports that are not using thermal cameras are evaluating the possibility of doing so; in the meantime, their screening procedures include taking temperatures with a handheld thermometer, looking for signs of illness, and requiring travelers to fill out a questionnaire. In response to plans of the US Department of Homeland Security to check commercial airline passengers’ temperatures, a member of the Privacy and Civil Liberties Oversight Board is pressing the agency for more details, warning the global pandemic “is not a hall pass to disregard the privacy and civil liberties of the traveling public.”

Transportation. Some Chinese cities are equipping public transportation centers with cameras that combine thermal imaging and facial recognition. Wuhan Metro transport hubs are being equipped with cameras from Guide Infrared, and Beijing railway stations are adding cameras from Baidu and Megvii. In addition, a Chinese limousine service has installed thermal cameras in its vehicles to monitor drivers and passengers. In Dubai, police are using thermal imaging and facial recognition to monitor public transport users via cameras mounted on ‘smart helmets.’

Employee Screening. Companies are using thermal cameras to screen employees for fevers. This is done broadly in China and South Korea at entrances to offices and major buildings, often using combined thermal imaging and facial recognition. Elsewhere, thermal cameras without facial recognition are increasingly used. For example, Brazilian mining company Vale SA is installing thermal cameras to screen employees entering buildings, mines, and other areas. Indian Railways installed a thermal camera from FLIR at an office entrance, among other COVID-19 mitigation measures.

Some US companies and organizations are also screening employees with thermal cameras, including Tyson Foods; Amazon, which is screening warehouse workers; and the VA Medical Center in Manchester, New Hampshire, which is scanning staff and patients. It appears that most US companies that have begun screening employees for fevers, like Walmart and Home Depot, are using handheld thermometers.

Public Facing Offices. As stated above, thermal cameras read skin temperature, and are not a substitute for temperature-taking methods that measure core body temperature. However, some locations are making decisions based solely on thermal camera readings. For example, in Brasov, Romania, a city office installed thermal cameras at its entrances, automatically denying entrance to anyone with a temperature of over 38°C. Because thermal camera readings do not always match core body temperatures, there is a risk that people without fevers will be unfairly impacted by reliance solely on thermal camera temperature readings.

Customer and Patient Screening. Thermal cameras are growing in popularity among US businesses and hospitals as a way to screen customers and patients, respectively. A grocery store chain in the Atlanta, Georgia area is screening incoming customers using FLIR cameras. Customers with temperatures of 100.4°F or higher are pulled aside by an employee and given a flyer asking them to leave, in an attempt to handle the situation discreetly. Wynn Resorts in Las Vegas plans to screen guests at its properties and require anyone who registers a temperature of 100.4°F or higher to leave. Texas businesses and hospitals are also starting to adopt thermal cameras. Hospitals elsewhere are following this trend – for example, Tampa General Hospital in Florida now screens patients with a thermal camera system made by care.ai, a healthcare technology company.

Public Surveillance. Thermal cameras allow authorities and businesses to screen large numbers of people in real-time, making them ideal for monitoring public areas. In China, thermal cameras with facial recognition surveil many public places; some systems can even notify police of people who are not wearing masks. In several cities in Zhejiang province, police and other officials are wearing Rokid’s thermal glasses to monitor people in public spaces like parks and roadways. These glasses combine thermal imaging with facial recognition, as they also record photos and videos. Thermal sensing drones are also being used in numerous cities.

Use of thermal imaging has grown elsewhere, too. In India, a thermal camera provider is considering installing its cameras around Delhi, both in public spaces and in businesses. Huawei has also offered thermal cameras as a solution for monitoring COVID-19 in India. In New Zealand, thermal cameras originally developed for pest control are being reworked to monitor for fevers in public places and are in use by some businesses. In some areas of the UK, police use thermal cameras to spot people breaking social distancing orders at night. The Qassim region of Saudi Arabia is monitoring the public with drones carrying thermal cameras.

It is uncommon in the US to use thermal cameras as a tool for public surveillance. However, in April, police in Westport, Connecticut tested a Draganfly ‘pandemic drone’ intended to measure temperatures and enforce social distancing. Westport police use drones for other purposes, but not for this kind of mass-monitoring. The program was quickly dropped when it was met with criticism from the public and the American Civil Liberties Union (ACLU) of Connecticut, which questioned the effectiveness of the drones and raised privacy concerns. Other cities that were also interested in Draganfly’s drones, like Los Angeles, Boston, and New York, may still be considering them.

In addition to drones, some US entities are reportedly considering Rokid’s thermal glasses. The company is discussing the sale of its glasses with various US businesses, hospitals, and law enforcement departments.

  3. Technical and Other Limitations

In general, thermal imaging is used in regulated clinical settings with validated clinical protocols to diagnose or detect illness and triage patients. The use of specific thermal imaging devices to detect possible cases of COVID-19 or for other medical purposes, in general, requires US Food and Drug Administration (FDA) approval. In such cases, thermal imaging technologies would be considered by the FDA as medical devices. Concerning labeling for thermal imaging technologies, the FDA stated:

“When evaluating whether these products are intended for a medical purpose, among other considerations, FDA will consider whether: 

1) They are labeled or otherwise intended for use by a health care professional; 

2) They are labeled or otherwise for use in a health care facility or environment; and 

3) They are labeled for an intended use that meets the definition of a device, e.g., body temperature measurement for diagnostic purposes, including such use in non-medical environments (e.g., airports).”

The use of thermal imaging in non-medical environments, however, makes it necessary to explore the technical limitations of using such technologies in high-traffic areas, like airports, for non-diagnostic yet medical purposes.

The fact that fever or body temperature alone can be a poor indicator of viral infection or contagion complicates the validity of thermal scanning for COVID-19 surveillance. Much, if not most, of the time, fevers can be masked with over-the-counter or unrestricted treatments, such as non-steroidal anti-inflammatory drugs, that can alleviate signs of fever for four to six hours depending on the severity or stage of the condition. Non-infectious conditions, such as pregnancy, menopause, or inflammation, might also cause elevated temperature, which can render thermal scanning highly sensitive but non-specific to any particular condition. For example, according to Johns Hopkins Medicine, hot flashes are the most common symptom of menopause, affecting 75% of all women in this stage, for up to two years. Also, confounding factors like inconsistencies or variations in viral response or strain can render thermal scanning insufficient for detecting specific types of infectious diseases like respiratory viruses.
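To make the “sensitive but non-specific” point concrete, the short sketch below applies Bayes’ rule with entirely hypothetical sensitivity, specificity, and prevalence figures. It shows why, when few people in a screened crowd are actually infected, most fever alarms will be false positives.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: probability that a flagged person is actually infected."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical illustration only: a screen that catches 80% of febrile
# infections, wrongly flags 10% of healthy people, in a crowd where
# 1% of those screened are infected and febrile at that moment.
ppv = positive_predictive_value(sensitivity=0.80, specificity=0.90, prevalence=0.01)
print(f"Share of alarms that are true positives: {ppv:.1%}")  # roughly 7.5%
```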

Scientific literature suggests that reliance on public thermal scanning to detect fever is concerning from an ethical standpoint and, given its technical limitations, is not a reliable disease surveillance strategy to support phased reopening. In a study evaluating the utility of thermal scanning in airports, researchers concluded that because the technology would be applied in a public setting, unbeknownst to passengers, controversy and complexity around matters of opt-in/out consent are inevitable. Studies have shown that thermal imaging technology can reasonably correlate core temperatures with influenza infection. However, its technical limitations render it insufficient to detect fever in settings where several individuals are moving in different directions at once, as in public settings with random, high pedestrian traffic. FDA labeling requirements are consistent with this limitation, mandating that labels acknowledge that the technology “should be used to measure only one subject’s temperature at a time.” Therefore, thermal scanning protocols would likely require structured, individual-level assessments along with non-compulsory and non-coercive (freely given) consent to be somewhat successful and feasible within public health surveillance settings that adhere to ethical standards of personal autonomy.

  4. Privacy and Civil Rights Advocates’ Concerns

Privacy and civil rights advocates in the US have raised concerns about the potential consequences of using thermal imaging, such as discrimination and loss of opportunity. Since thermal imaging cannot distinguish fevers caused by COVID-19 from other causes of high body temperature, equating raised body temperature with the virus would lead to many people falsely being identified as COVID-19 risks and facing the associated downsides of that label, including discrimination. The Electronic Frontier Foundation (EFF) points out that thermal cameras are surveillance devices that can “chill free expression, movement, and association; aid in targeting harassment and over-policing of vulnerable populations; and open the door to facial recognition.” In light of the questionable effectiveness of thermal cameras, EFF cautions against using them to monitor the public at large. The ACLU of Connecticut criticized Draganfly’s drones as “privacy-invading,” and urged officials only to adopt surveillance measures against the spread of COVID-19 that are “advocated for by public health professionals and restricted solely for public health use.” These concerns are also expressed in the context of fears that surveillance technologies adopted during the pandemic may remain long after their original purpose has been fulfilled.

In a recent White Paper on “Temperature Screening and Civil Liberties during an Epidemic,” the ACLU recommended that temperature screening “should not be deployed unless public health experts say that it is a worthwhile measure notwithstanding the technology’s problems. To the extent feasible, experts should gather data about the effectiveness of such checks to determine if the tradeoffs are worth it.” The ACLU further recommended that people should know when their temperature is going to be taken and that “standoff thermal cameras should not be used.” In addition, “no action concerning an individual should be taken based on a high reading from a remote temperature screening device unless it is confirmed by a reading from a properly operated clinical grade device, and provisions should be made for those with fevers not related to infectious illness.”

  5. Regulatory Responses

In the US, regulatory responses to taking one’s temperature in non-healthcare scenarios stem primarily from anti-discrimination statutory obligations. The Equal Employment Opportunity Commission (EEOC) recently revised its rules regarding the Americans With Disabilities Act in the context of a pandemic. The revisions allow employers to take employees’ temperatures during COVID-19. They also allow employers to take job candidates’ temperatures after making a conditional offer, as well as withdraw a job offer if a newly hired employee is diagnosed with COVID-19. However, the guidance does not distinguish between manual temperature checks and thermal scanning cameras.

This distinction drives many of the regulatory responses in Europe, where multiple Data Protection Authorities (DPAs) have published guidance on checking temperatures of employees, but also of customers or pedestrians. One of the regulators that draws a clear distinction between the two types of measuring temperature is the CNIL (the French DPA). According to the CNIL, “the mere verification of temperature through a manual thermometer (such as, for example, the contactless thermometers using infrared) at the entrance of a place, without any trace being recorded, and without any other operation being effectuated (such as taking notes of the temperature, adding other information etc.), does not fall under data protection law”. 

However, things fundamentally change when thermal scanning through cameras is involved. In this sense, the CNIL issued a prohibition: “According to the law (in particular Article 9 [of the General Data Protection Regulation] GDPR), and in the absence of a law that expressly provides this possibility, it is forbidden for employers to: 

The prohibition of these two types of temperature measurement echoes guidance issued by the French Ministry of Labor in its “National Protocol for Deconfinement Measures.” Before including a prohibition on temperature measurement with the use of cameras, the Protocol relies on the findings of the High Council for Public Health that the COVID-19 infection may be asymptomatic or barely symptomatic, and that “fever is not always present in patients.” It also recalls that a person with COVID-19 can be infectious “up to 2 days before the onset of clinical signs,” and that “bypass strategies to this control are possible by taking antipyretics.” The Ministry of Labor concludes that “taking temperature to single out a person possibly infected would be falsely reassuring, with a non-negligible risk of missing infected persons.”

The Spanish DPA takes the position that taking the temperatures of individuals to determine their ability to enter the workplace, commercial spaces, educational institutions, or other establishments, amounts to processing of personal data without making any distinction in its guidance between manually held thermometers and thermal imaging. It seems to focus on the purposes for which individual measurement of temperature is used when making this assessment. The Spanish DPA highlights in its detailed guidance that “this processing of personal data amounts to a particularly severe interference in the rights of those affected. On one hand, because it affects data related to health, not only because the value of the body temperature is data related to health by itself, but also because, as a consequence of that value it is assumed that a person suffers or not from a disease, in this case a coronavirus infection.”

The Spanish DPA also notes that the consequences of a possible denial of entry to a specific space may have a significant effect on the person concerned. Therefore, it urges organizations to consider, among other measures, properly informing workers, visitors, or clients about temperature monitoring. They should also allow individuals with a higher than normal temperature to challenge a decision denying them access before personnel who are qualified to assess possible alternative reasons for the high temperature, and to grant access where justified. It is also relevant to note that, when it comes to lawful grounds for processing, the Spanish DPA does not deem consent and legitimate interests appropriate lawful grounds. The processing needs to be based either on a legal obligation or on the interest of public health, ensuring that the additional conditions required by these two lawful grounds are met.

The Italian DPA (Garante) takes the position that taking one’s “body temperature in real time, when associated with the data subject’s identity, is an instance of processing personal data.” As a consequence of this fact, the DPA states that “it is not permitted to record the data relating to the body temperature found; conversely, it is permitted to record the fact that the threshold set out in the law is exceeded, and recording is also permitted whenever it is necessary to document the reasons for refusing access to the workplace.” This rule applies in an employment context. Where the body temperature of customers or occasional visitors is checked, “it is not, as a rule, necessary to record the information on the reason for refusing access, even if the temperature is above the threshold indicated in the emergency legislation.” 

It is important to highlight here that in the case of Italy, there is special legislation adopted for managing the COVID-19 pandemic that mandates temperature taking by “an employer whose activities are not suspended (during the lockdown – n.)” to comply with the measures for the containment and management of the epidemiological emergency. This special legislation acts as a lawful ground for processing. Once the legislation expires or becomes obsolete, taking the temperature of employees or other individuals entering a workplace will likely remain without a lawful ground. According to the Garante, another instance where special emergency legislation allows for temperature measurement is in the case of airport passengers. It should also be noted that neither the Garante’s guidance nor the special legislation mentioned above makes a distinction between manual temperature taking and the use of thermal cameras.

By contrast, the Belgian DPA takes the position that “the mere capturing of temperature” is not a processing of personal data, without distinguishing between manual temperature taking and the use of thermal cameras. Accordingly, the DPA issued very brief guidance stating that “if taking the temperature is not accompanied by recording it somewhere or by another type of processing, the GDPR is not applicable.” It nonetheless reminds employers that all the measures they implement must be in accordance with labor law as well as the guidance of competent authorities. 

The Dutch DPA warned controllers that want to measure the temperature of employees or visitors about the uncertainty of detecting COVID-19 by merely detecting a fever. It also advised that “taking temperatures is not simply allowed. Usually you use this to process medical data. And this falls under the GDPR.” According to the Dutch DPA, “the GDPR applies in this situation because you not only measure someone’s temperature, but you also do something with this medical information. After all, you don’t measure for nothing. Your goal is to give or deny someone access. To this end, this person’s temperature usually has to be passed on or recorded somewhere so that, for example, a gate can open to let someone in.” In further guidance on the question of whether temperature measurement falls under the GDPR, the DPA explained that “a person’s temperature is personal data. (…) The results (of temperature measurement – n.) will often have to be passed on and registered somewhere to allow or deny someone access. Systems in which gates open, which give a green light or which do something automated on the basis of the measurement data are also protected by the GDPR.” The DPA also states that even when the GDPR is not applicable in those cases where the temperature is merely read with no further action, a breach of the right to privacy or of other fundamental rights might be at issue: “The protection of other fundamental rights, such as the integrity of the body, may also be expressly at stake. Depending on how it is set up, only measuring temperature can indeed be illegal.”

The UK Information Commissioner’s Office (ICO) warns organizations that want to deploy temperature checks or thermal cameras on site that “when considering the use of more intrusive technologies, especially for capturing health information, you need to give specific thought to the purpose and context of its use and be able to make the case for using it. Any monitoring of employees needs to be necessary and proportionate, and in keeping with their reasonable expectations.” However, it does seem to allow such practices in principle, but only after a Data Protection Impact Assessment is conducted. The ICO states that it worked with the Surveillance Camera Commissioner to update a DPIA template for uses of thermal cameras. “This will assist your thinking before considering the use of thermal cameras or other surveillance,” the ICO adds.

The Czech DPA also adopted specific guidance for the use of thermal cameras and temperature screening, taking the position that data protection law is applicable only when “the employer intends to record the performed measurements and further work with data related to high body temperature in conjunction with other data enabling the identification of the person whose body temperature is being taken.” As opposed to the Spanish DPA, which found that legitimate interests cannot be a lawful ground for processing such data, the Czech DPA suggests that employers can process the temperature of their employees on the basis of legitimate interests, paired with one of the acceptable uses for processing health data under Article 9(2). The DPA further advises that the necessity of such measures needs to be continuously assessed and warns that “measures which may be considered necessary in an emergency situation will be unreasonable once the situation returns to normal.”

In Germany, the Data Protection Commissioner of Saarland has already started an investigation into a supermarket that installed thermal cameras to admit only customers with normal temperatures to its premises, after declaring to the media that “the filming was used to collect personal data, including health data, in order to identify a potential infected person,” and that this measure breached the GDPR and the right to informational self-determination. According to media reports, the supermarket decided to suspend the thermal scanning measure. In addition, the DPA of Rhineland-Palatinate notes in official guidance that “the mere fact that an increased body temperature is recorded does not automatically lead to the conclusion that COVID-19 is present. Conversely, an already existing coronavirus disease does not necessarily have to be identified by an increased body temperature. Therefore, the suitability of the body temperature measurement is in doubt.” The DPA suggests that alternative measures should be implemented by employers to comply with their duty of care towards the health of employees, such as working from home whenever possible or encouraging employees to seek medical advice at the first signs of disease. The DPA of Hamburg is more precise and clearly states that “neither the use of thermal imaging cameras nor digital fever thermometers to determine symptoms of illness is permitted” to screen persons entering shops or other facilities. This can only be offered to individuals as a “voluntary service.”

It seems that all DPAs which have issued guidance on this matter have determined that thermal scanning and temperature measurement are particularly intrusive measures. But their responses vary, from a clear prohibition on using thermal cameras for triaging people (CNIL, Hamburg DPA), to allowing thermal scanning in a quite restricted way (Spanish DPA), to possibly allowing video thermal scanning by default as long as a DPIA is conducted (UK ICO), to making a point about handheld temperature measurement as not falling under data protection law (Dutch DPA, Belgian DPA, Czech DPA, CNIL), to not making any differentiation between handheld temperature measurement and video thermal scanning when allowing such measures (Italian DPA). The European Data Protection Board (EDPB) has not yet issued specific guidance on the use of thermal cameras or, generally, on the measurement of temperature. Given the diversity in approaches taken by European DPAs, it may be necessary for the EDPB to provide harmonized guidance.

Elsewhere in the world, the Singaporean Personal Data Protection Commission advises organizations that “where possible, deploy solutions that do not collect personal data. For instance, your organisation may deploy temperature scanners to check visitors’ temperature without recording their temperature readings, or crowd management solutions that only detect or measure distances between human figures without collecting facial images.”

  6. Conclusion

This article provides a comprehensive overview of the use cases for thermal scanning cameras, their technical and medical limitations, the civil rights concerns surrounding them, and the up-to-date regulatory responses to their use in the fight against the spread of COVID-19 as countries enter the first “deconfinement” stage of this pandemic. Organizations considering the deployment of temperature measuring as part of their exit strategies should carefully analyze whether the benefits of such measures outweigh the risks of discrimination, loss of opportunity, and other risks to the civil rights of the individuals who will be subjected to this type of screening en masse. Advice from public health authorities, public health specialists, and other regulators should always be part of this assessment. Organizations should also consult the individuals who will be subjected to these measures, to learn about their legitimate expectations regarding safety at this stage of the pandemic weighed against their other rights.

The authors thank Charlotte Kress for her research support. 

For any inquiries, the authors can be contacted at [email protected] or [email protected]

Bipartisan Privacy Bill Would Govern Exposure Notification Services

Authors: Stacey Gray, Senior Counsel; Katelyn Ringrose, Christopher Wolf Diversity Law Fellow; and Polly Sanderson, Policy Counsel


Yesterday, Senators Cantwell (D-WA), Cassidy (R-LA), and Klobuchar (D-MN) introduced a new COVID-19 data protection bill, the Exposure Notification Privacy Act, which would create legal limits for “automated exposure notification services.” The bill comes on the heels of Republican and Democratic-led bills introduced earlier this month that would govern COVID-19 data much more broadly.

In contrast, the Exposure Notification Privacy Act would specifically regulate “exposure notification” apps, primarily mobile apps that enable individuals to receive automated alerts if they have been exposed to COVID-19. Such apps often harness Bluetooth, location data, or other information from phones to enable automated alerts for users who have come into contact with an asymptomatic person who is later diagnosed with COVID-19. The Centers for Disease Control and Prevention has described exposure notification systems as a complement to traditional manual techniques used to monitor the spread of COVID-19.
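As a rough, simplified illustration of how Bluetooth-based exposure notification works in principle, the sketch below shows a phone broadcasting rotating random tokens, remembering tokens it hears nearby, and later checking them against tokens voluntarily published by users who report a diagnosis. The identifiers, storage, and matching rule are assumptions made for illustration, not the actual Apple/Google protocol.

```python
import secrets

class Phone:
    def __init__(self):
        self.my_tokens = []        # tokens this phone has broadcast
        self.heard_tokens = set()  # tokens overheard from nearby phones

    def broadcast_token(self):
        """Generate and 'broadcast' a new random, rotating identifier."""
        token = secrets.token_hex(16)
        self.my_tokens.append(token)
        return token

    def hear(self, token):
        """Record a token received over Bluetooth from a nearby phone."""
        self.heard_tokens.add(token)

    def check_exposure(self, published_diagnosis_tokens):
        """Compare locally stored tokens against tokens voluntarily
        published by diagnosed users; a match suggests past proximity."""
        return bool(self.heard_tokens & set(published_diagnosis_tokens))

# Two phones come near each other; later, Alice reports a diagnosis.
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast_token())           # proximity event
published = alice.my_tokens                 # uploaded after diagnosis
print("Bob is notified:", bob.check_exposure(published))  # True
```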

As cities and states begin to reopen, many public health authorities are working with private companies or nonprofits to develop these apps. Large employers are also considering using exposure notification services as part of “back to work” strategies to help ensure safe working environments. In order for automated exposure notifications to be highly effective, it is estimated that 40-60% of a given population would need to install such an app (although contact tracing may work at much lower levels than most people think). At the same time, recent research shows a marked lack of trust among the American population when it comes to their digital privacy amid COVID-19. For these reasons, if exposure notification methods are to be effective, trust and adoption are crucial.

“Exposure notification services can support the work of public health agencies and can help employers keep workplaces safe, but only if they are designed and implemented with privacy in mind and in the public interest. The Cantwell-Cassidy bill guarantees that data collected by mobile apps is protected by strong legal safeguards, in addition to technical measures companies put in place.” – Jules Polonetsky, CEO, Future of Privacy Forum

Below, FPF summarizes the core provisions of the Exposure Notification Privacy Act, which, if passed, would become effective immediately. If adopted, it would codify core data protection principles, such as purpose limitation. We describe below the Act’s: (1) jurisdictional and material scope; (2) obligations for covered entities; (3) anti-discrimination provisions; and (4) federal and state enforcement and oversight.

The full text of the Exposure Notification Privacy Act can be found HERE.

The section-by-section of the bill can be found HERE.

The one-pager of the bill can be found HERE.

Jurisdictional and Material Scope

Unlike other COVID-19 privacy bills recently introduced, the Exposure Notification Privacy Act has a narrow scope, applying only to entities that collect data through “automated exposure notification services,” i.e., mobile apps that enable automated alerts to those who may have been exposed to COVID-19.

Covered entities include commercial businesses, non-profits, and common carriers that collect or process data that is “linked or reasonably linkable to [any] individual or device linked or reasonably linkable to an individual.” Although the bill does not contain an explicit exemption for de-identified data, covered data does not include “aggregate data.”

Importantly, this bill would not apply to the various technologies, including mobile apps, that enable traditional manual contact tracing, i.e., tracing that involves public health experts interviewing a diagnosed person and contacting friends and family who may have been exposed. For example, New York City is partnering with Salesforce to assist manual contact tracers by deploying a call center as well as a customer relationship and case management system. San Francisco and Massachusetts have also been ramping up manual contact tracing efforts. Many of these efforts are already subject to confidentiality restrictions that apply to public health agencies.

In addition, this bill would not affect state and local government entities who are developing and implementing automated exposure notification services “in house,” without partnering with private companies or non-profits. Generally, the federal government cannot directly regulate local governments engaged in traditionally local activities such as public health. 

Obligations of Covered Entities

Under this bill, commercial entities or nonprofits that operate “automated exposure notification services” would be subject to strict legal requirements. Many of the bill’s requirements are consistent with the requirements for COVID-19 apps set by the App Store and Google Play. As a result, app developers using the API created by Google and Apple should already be substantially in compliance.

These obligations include:

Anti-Discrimination Provisions 

In addition to obligations on app providers, the bill features strong anti-discrimination provisions that would apply to restaurants, educational institutions, hotels, retailers, and other places of “public accommodation” (as defined in Section 301 of the Americans with Disabilities Act). If passed, the bill would make it unlawful for these kinds of establishments to use data from such automated exposure notification services to deny people entry or services, or otherwise discriminate against them.

This would likely prevent these kinds of notification apps from being repurposed as immunity passports, at least to the extent that they are used to disallow someone from using public spaces “based solely on data collected or processed through an exposure notification service or an individual’s choice to use or not use” such a service. Immunity passports are methods for individuals to verify their “risk status” with respect to COVID-19 – i.e., that they have not been exposed, or are not showing symptoms for purposes of travel and work. Immunity passports have been widely criticized for their potential lack of efficacy, as well as their disparate impact on the basis of class and race. 

Enforcement and Oversight

The Exposure Notification Privacy Act’s requirements would be enforced by the Federal Trade Commission (FTC) and State Attorneys General (AGs). A violation of the bill would be treated as a violation of the FTC’s prohibition against unfair or deceptive acts or practices under the FTC Act (15 U.S.C. 57(a)(1)(B)). The bill also preserves existing rights of individuals under other federal and state laws, including consumer protection laws, civil rights laws, or common law. We expect further discussion in Congress around the issue of one federal standard, given the expected inter-state interoperability of many of the exposure notification apps. The Exposure Notification Privacy Act would become effective on the date of enactment.

This bill would also extend the purview of the Privacy and Civil Liberties Oversight Board (PCLOB) to federally declared public health emergencies as well as federal actions used to combat terrorism. PCLOB is an independent executive branch agency that is currently tasked with ensuring that federal efforts to protect the U.S. from terrorism appropriately safeguard privacy and civil liberties.

Looking Ahead

As governments around the world grapple with “back to work” strategies for 2020 and beyond, many are considering whether and how to use exposure notification services to help contain the virus. Senator Cantwell’s proposal offers a promising legal model to build much-needed trust in such services. 

In the United States, public health authorities in North Dakota, South Dakota, Utah, Georgia, California, and other states are working with private companies to develop contact tracing services. Abroad, Canada recently released “Privacy Principles for Contact Tracing,” Australia has enacted legislation for its COVIDSafe tracing app to allay privacy concerns, and the UK has created a Data Ethics Advisory Board for the NHS COVID-19 App.

Meanwhile, Google and Apple have partnered to provide the interoperability and API access needed for Bluetooth-powered exposure notification services to function effectively. Both companies have outlined strict standards for apps deploying this new API, in addition to creating guidelines for any COVID-19 related apps, including those that offer medical advice, education or training services, and social support. 


Did we miss anything? Let us know at [email protected] as we continue tracking developments related to exposure notification services.

Image Credit: Photo by Mika Baumeister on Unsplash

FPF Releases New Report on GDPR Guidance for US Higher Education Institutions

Today, FPF released The General Data Protection Regulation: Analysis and Guidance for US Higher Education Institutions by Senior Counsel Dr. Gabriela Zanfir-Fortuna. The new report contains analysis and guidance to assist United States-based higher education institutions and their edtech service providers in assessing their compliance with the European Union’s General Data Protection Regulation (GDPR).

The GDPR, which went into effect two years ago this week on May 25, 2018, grants individuals certain rights to control how their personal data is collected and used, and imposes steep penalties on organizations found to be noncompliant. When the GDPR came into effect, there was limited guidance and few decisions available to help US higher education institutions and edtech companies understand their obligations. Now, two years into the regulation’s implementation, there is significant guidance that can be analyzed and applied. The law applies to most U.S.-based higher education institutions and edtech companies, as these have some type of interaction with EU residents, whether students, faculty, alumni, or through study abroad programs or other initiatives.

The report details ten practical steps to begin a GDPR compliance program.

“With this report, we hope to support higher education institutions and edtech companies to solidify trust in the way they are handling personal data of students, prospective students, and faculty, by implementing data practices that fully take into account privacy and data protection requirements”, said Dr. Zanfir-Fortuna, the author of the report, who is Senior Counsel for the Future of Privacy Forum.

Dr. Zanfir-Fortuna has written extensively about GDPR, including in two blog posts from fall 2019, “10 Reasons Why the GDPR is the Opposite of a Notice and Consent Type of Law,” and “Key Findings from the Latest Right to Be Forgotten Cases” as well as an in-depth report, produced by FPF in partnership with Nymity, on the use of “legitimate interests” as a lawful ground for data processing under GDPR. She is also a co-author of the GDPR Commentary published this year by Oxford University Press. Additionally, FPF published a comparison of the key differences between GDPR and the California Consumer Privacy Act (CCPA) to support organizations navigating compliance with both laws.

Amelia Vance, FPF’s Director of Youth & Education Privacy, cautioned that many U.S.-based institutions remain unprepared, despite the high stakes.

“As higher education institutions around the country navigate this unprecedented time, including a rapid transition to online learning and administration, it is critical to remain vigilant about data protection and privacy requirements,” said Vance. “An effective compliance program requires continuous attention and evolution. The consequences of losing sight of that now – potentially millions of dollars under GDPR – are significant, even during these uncertain times.”

The report includes a 10-step checklist with instructions for executing an effective GDPR compliance program. It is designed to assist both organizations with established compliance programs seeking to update or refresh their understanding of their obligations under GDPR, as well as those that are still in the process of creating or sustaining a compliance structure and seeking more in-depth guidance.

About FPF

The Future of Privacy Forum (FPF) is a Washington, DC-based think tank that seeks to advance responsible data practices. The forum is led by Internet privacy experts and includes an advisory board comprised of leading figures from industry, academia, law, and advocacy groups. For more information, visit www.fpf.org.

Media Contact

[email protected]

Tech Talk with the Regulators – Understanding Anonymization Under the GDPR

The General Data Protection Regulation (GDPR) has already been in existence for four years, and has been in force for two years. How can anonymization techniques under the GDPR help Data Protection Officers (DPOs) assess innovation? I hosted a webinar with Truata that featured experts from DPAs in Italy, Ireland, and the UK to find out more about their perspective.

The recording is available here (link to the webinar).

‘A revision of the 2014 opinion on anonymization techniques is in the working program of the EDPB’

In 2014, the European data protection authorities, assembled in the Article 29 Working Party, provided guidance in their opinion on anonymization techniques. Giuseppe D’Acquisto, Senior Technology Advisor at the Italian Data Protection Authority, said that some adjustments to the 2014 guidance are needed because there are unexplored aspects of anonymization in the GDPR: “A revision of the 2014 opinion is in the working program of the EDPB.”

Ultan O’Carroll, Deputy Commissioner for Technology and Operational Performance at the Data Protection Commission in Ireland, said: “The 2014 opinion is still as valid as it ever was, if not more so.”

O’Carroll: “The principles and rights that we talked about in the 2014 opinion, and the risks that were identified – all of which have materialized in the GDPR – are particular about the importance of singling out, for instance. So, the guidance continues to be relevant and impactful, and outlines considerations for data controllers as they use and attempt to anonymize data.”

‘Unexplored aspects of anonymization in the GDPR’

D’Acquisto gave three examples where in his view the use of Privacy Enhancing Technologies (PETs) could play a role.

  1. On legitimate interest as a legal ground: “Anonymization techniques can become an element in the balancing test when you want to invoke legitimate interest.”
  2. On public interest as a legal ground: “Public interest is an opportunity when used in combination with national law.” He called on national legislators to explore the possibility of including the use of privacy-enhancing safeguards in laws.
  3. On the secondary, (in)compatible use of personal data for further processing: “Rethinking the 2014 opinion is useful to explore new opportunities for data controllers.”

D’Acquisto’s last remark came against the background of Recital 50 of the GDPR. It clarifies Article 6 of the GDPR which stipulates the lawfulness of processing. Recital 50 states that the processing of personal data for purposes other than those for which the personal data were initially collected should be allowed only where the processing is compatible with the purposes for which the personal data were initially collected. In order to ascertain whether a purpose of further processing is compatible with the purpose for which the personal data are initially collected, the controller should take into account the existence of appropriate safeguards in both the original and intended further processing operations. D’Acquisto stressed that value could be added to data in the interest of the public when applying anonymization techniques as safeguards for our rights and freedoms.

‘Time to focus on privacy risk management’

Simon McDougall, Executive Director for Technology Policy and Innovation at the Information Commissioner’s Office in the UK, said that it is time to focus on privacy risk management: “There is a tension between risk management and hard science. We can now quantify re-identification risk, for example. The problem is that most people do not understand risk. They struggle with the concept of residual risk and the question of what risk to accept.”
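One simple way to quantify re-identification risk, offered here only as an illustration of the point (in Python, with a toy, hypothetical dataset) rather than as any regulator’s methodology, is to count how many records are unique on their quasi-identifiers: a record whose combination of quasi-identifiers is shared by k records in total is k-anonymous, and records with k = 1 are the easiest to single out.

```python
from collections import Counter

def k_anonymity_profile(records, quasi_identifiers):
    """Return, for each record, the size k of its equivalence class,
    i.e. how many records share the same quasi-identifier values."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    counts = Counter(keys)
    return [counts[key] for key in keys]

# Toy, hypothetical dataset: age band and postcode prefix as quasi-identifiers.
data = [
    {"age_band": "30-39", "postcode": "SW1", "diagnosis": "A"},
    {"age_band": "30-39", "postcode": "SW1", "diagnosis": "B"},
    {"age_band": "60-69", "postcode": "EC2", "diagnosis": "C"},  # unique -> k = 1
]
ks = k_anonymity_profile(data, ["age_band", "postcode"])
unique_share = sum(1 for k in ks if k == 1) / len(ks)
print(f"dataset k-anonymity: {min(ks)}; share of unique records: {unique_share:.0%}")
```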

McDougall stressed that the challenge today is about communication: “We now have a better understanding of how to manage (residual) risks and bring them back to an acceptable level.”

He also explained the benefits of a layered approach to privacy risk management, rather than a focus on a single technology. “Think of it as a Swiss cheese notion of [stacked] risk management measures,” McDougall said. “A layered approach to control prevents risk from passing through various risk mitigation measures because the holes do not line up.”

While it is important to keep up with the cutting edge of anonymization technologies in order to understand the full scope of possibilities, McDougall implored the audience to “look at the basics, not just the cutting-edge material.”

‘Legal and technical competences are complementary to each other’

From the discussion, it became clear that to move the use of anonymization techniques forward, Data Protection Officers (DPOs) have an important role to play. The broader questions around innovation, sharing of data, and repurposing of data have become particularly important in the context of COVID-19. Accordingly, each of the experts expressed their advice for DPOs given the developments in anonymization technologies.

D’Acquisto suggested that DPOs should not rely on either legal or technical competence alone. “Both competencies are important in order to tackle the complex aspects of anonymization techniques,” he said. “If we look at one, we miss the opportunities of the other. Each is complementary to the other. A holistic approach is needed with legal safeguards, technical safeguards, and a path toward compliance.”

‘DPOs: do not go alone; get help’

O’Carroll added that it is essential that DPOs not act in isolation. “DPOs need to get access to scientists and to organizational people, but also to expert advice in terms of social science, cognitive science, interface design, or mathematics, for example. Do not go alone; get help,” he said. “And be sure that what you’re presenting is robust. Take your time to do that. It’s not worth carrying forward without that because you’ll be asked questions that you may not think about.”

In closing, McDougall remarked that “DPOs should think about themselves as intelligent customers. Anonymization technologies are very complex. A DPO should be able to have conversations with experts. Instead of thinking this is all incredibly complicated, they should try to understand what the risks are for the individual and the organization. But it is possible to follow, to understand the principles, to understand that the levels of risk and the technology itself are changing all the time. It is possible to keep up with it so you can then have the conversation with the right expert.”

 

To learn more about FPF in Europe, please visit fpf.org/eu.

 

Apple & Google Update Terms for COVID-19 Apps

While the legislative process for COVID-related data is advancing, the business and platform standards for apps are falling into place.

Rules for All COVID Apps

In response to the coronavirus pandemic, developers are rushing to create applications to support healthcare systems, spread awareness in the community, guide epidemiological research, and mitigate the spread of the virus. Apple and Google have reemphasized their existing policies on app development, data protection, and appropriate content, and have bolstered requirements with direct specifications regarding COVID-19 apps.

The new Bluetooth-based Exposure Notification APIs created by Apple and Google have received a significant amount of attention. Due to the expanded access to Bluetooth scanning enabled by these APIs, both companies have limited use of these APIs to apps provided by health departments. Apple and Google require that the apps be voluntary, that they not collect location information, and that they gather only very limited additional user information. For more information about the Apple/Google Exposure Notification API, read more about the details provided by Apple and Google.

However, new App Store and Google Play terms updated by the companies also set new rules for ALL COVID-related apps. 

Apple’s COVID-19 App Requirements

In a March 14 post, Apple explained that it would evaluate COVID-19 apps critically, ensuring that data sources are reputable. 

A COVID-19-specific app will only be considered if developers represent recognized entities such as “government organizations, health-focused NGOs, companies with clear credentials in health issues, and medical or educational institutions. Any entertainment or game apps with COVID-19 as their theme will be prohibited.” 

In addition to these new requirements, Apple’s updated policy also describes relevant limitations on certain types of apps. 

  1. According to a new section 5.1.1.ix., apps in “highly-regulated” fields, such as healthcare, financial services, and air travel, need to be submitted by a “legal entity that provides the services, and not by an individual developer.” 
  2. Safety consistently remains a top priority for Apple’s restrictions. Section 1.4.1 prohibits medical apps from providing inaccurate data or information. Apple’s team will utilize greater scrutiny when evaluating apps that could be used for diagnosing or treating patients. Developers must be reachable by users to respond effectively to questions and support issues.
  3. Privacy policies in the App Store call for transparency about all data collection, retention, and sharing. If the app collects health-related data, this personal data may not be used for advertising, nor can the app store this information in iCloud. However, this information may be used to improve health management or support relevant research if the user (or guardian of a minor) consents. Consent requirements are as follows:
    1. nature, purpose, and duration of the research
    2. procedures, risks, and benefits to the participant
    3. information about confidentiality and handling of data (including any sharing with third parties)
    4. a point of contact for participant questions
    5. the withdrawal process
  4. Apps conducting health research must obtain permission from an independent ethics review board. 

Just like any application available in the App Store, many preexisting relevant restrictions continue to apply to new apps related to COVID-19. 

Google’s COVID-19 App Requirements

Google also posted updated guidelines for COVID apps, which it defined as follows:

Apps that are subject to these requirements include, but may not be limited to:

  1. Apps that use, approximate or leverage coronavirus, COVID-19, pandemic, or related keywords in their Google Play Store listing metadata. 
  2. Apps that provide medical, treatment, vaccine, testing, or other related information specifically for COVID-19.
  3. Apps that support COVID-19-related response, containment, research, or education/training efforts.
  4. Apps that support services used to respond specifically to COVID-19, for example, apps that provide social support (food stamps, payment), healthcare, loans, etc., specifically in response to COVID-19.

The new Google Play rules state that only the following categories of apps are eligible to use COVID-19 or other related keywords and marketing in their Google Play Store app listing:

  1. Official governmental apps, which connect users to authoritative information and services. 
  2. Apps published by, or in direct affiliation with:
    1. a healthcare system or provider (e.g. CVS Health, UK National Health Service, UnitedHealth Group, Kaiser Permanente, French national healthcare system, Netcare (South Africa), One Medical, etc.); 
    2. a nationally recognized medical or epidemiological research organization deeply rooted in medical research (including nationally recognized medical schools); the organization should have approval from a registered governing body (for example, an Institutional Review Board in the US or the National Health Service (NHS) in the UK), and in case of dispute, endorsement by a local or national government or a verifiable healthcare non-governmental organization (NGO) will be required; or apps directly endorsed by an official.

In each case, the app must be directly published by, or in direct partnership with, one of these entities (e.g., the authorizing institution or organization is referenced, with full permission, in the app’s title, logo, or Google Play Store description). Endorsement by a non-government entity alone does not meet the qualification (e.g., an app endorsed by staff at a medical school would not qualify unless that app is published by, or in direct partnership with, the medical school).

Apps specifically created in response to COVID-19 may not access personal and sensitive data that is not required to directly support pandemic response efforts and epidemiological research, and the app’s privacy policy must describe this data use. COVID-19 apps that handle personal user data must also strictly comply with all other existing Google Play policy requirements.

Google also reiterated its restrictions on content related to sensitive events, misrepresentation, and deceptive behavior. COVID-19 apps cannot contain unverifiable information that undermines community education and relief efforts.

New Infographic Illustrates Key Aspects of Location Data

Today, the Future of Privacy Forum (FPF) published an infographic, “The World of Geolocation Data,” that outlines how location data is generated from mobile devices, who has access to it, and factors to consider in evaluating privacy risks. Data from our mobile devices, including smartphones and fitness trackers, can serve as a proxy for where we are located over time, revealing intimate information about individuals and groups.

“During the COVID-19 pandemic, many are interested in employing both location data and proximity signals from smartphones to track the spread of the virus and measure adherence to social distancing guidelines,” said Stacey Gray, FPF Senior Counsel. “We’re helping policymakers and public health officials understand location data so they can make proactive, knowledgeable choices about the use of this sensitive information.”

The infographic shows how mobile devices interpret signals from Wi-Fi and Bluetooth networks, cell towers, and GPS satellites to pinpoint their location, and how the mobile operating system analyzes that data to provide precise location measurements to mobile apps upon request. The graphic describes the different entities that are able to access, use, or share various types of location data, including cell phone carriers, mobile apps and app partners, and downstream recipients. Finally, it describes the factors that make location data more or less risky, including persistence and frequency, precision, accuracy, known or sensitive locations, and the use of de-identifying technologies.

Stacey Gray, Senior Counsel at FPF and the author of the infographic, will host a webinar to help policymakers better understand the complicated ecosystem for device location data on Tuesday, June 2nd at 12 PM EDT. The webinar will include an expanded discussion of the infographic, will answer questions about evaluating and mitigating risks in real-world location datasets, and will feature technical and legal experts, including Shane Wiley, CPO of Cuebiq; Kara Selke, VP of Commercial Development and Privacy at Streetlight Data; as well as Chelsey Colbert, Policy Counsel at FPF and Dr. Rob van Eijk, FPF’s Managing Director for Europe. To register for the event, click here.

Other recently-published resources from FPF related to privacy and the coronavirus pandemic include: 

The full list of FPF’s privacy and pandemics resources can be accessed on the FPF website at fpf.org/privacy-and-pandemics.

About FPF

The Future of Privacy Forum (FPF) is a non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting fpf.org. 

Understanding the "World of Geolocation Data"

How is location data generated from mobile devices, and who gets access to it? As debates continue over companies and public health authorities using device data to address the current global pandemic, it is more important than ever for policymakers and regulators to understand the practical basics of how mobile operating systems work, how apps request access to information, and how location datasets can be more or less risky or revealing for individuals and groups. Today, the Future of Privacy Forum released a new infographic, “The World of Geolocation Data,” that explores these issues.

In this infographic, we demonstrate how mobile devices, such as smartphones, interpret signals from their surroundings – including GPS satellites, cell towers, Wi-Fi networks, and Bluetooth – to generate a precise location measurement (latitude and longitude). This measurement is provided by the mobile operating system to mobile apps through a Location Services API when they request it and receive the user’s permission. As a result, apps must comply with the technical and policy controls set by the mobile operating systems, such as App Store Policies.
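To make that permission-mediated flow concrete, here is a minimal sketch in Swift using Apple’s CoreLocation framework, showing how an app requests a single location fix. The class name and accuracy setting are illustrative assumptions, not part of the infographic, and a real iOS app would also need an NSLocationWhenInUseUsageDescription string in its Info.plist.

import CoreLocation

// Minimal sketch: an app asks the operating system for a single location fix.
final class LocationFetcher: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyHundredMeters // the app states how precise it needs to be
    }

    func requestOneFix() {
        manager.requestWhenInUseAuthorization() // the OS, not the app, shows the permission prompt
    }

    func locationManager(_ manager: CLLocationManager, didChangeAuthorization status: CLAuthorizationStatus) {
        if status == .authorizedWhenInUse || status == .authorizedAlways {
            manager.requestLocation() // one-shot delivery via the callback below
        }
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let fix = locations.last else { return }
        print("latitude: \(fix.coordinate.latitude), longitude: \(fix.coordinate.longitude)")
    }

    func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
        print("location request failed: \(error)")
    }
}

The point of the sketch is that the app never reads positioning hardware directly; it asks the operating system, which enforces the user’s permission choice before returning a measurement.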

Many different entities (including, but not limited to, mobile apps) provide location features or use location data for a variety of other purposes. Different entities are subject to different restrictions, such as public commitments, privacy policies, contracts and licensing agreements, user controls, app store policies, and sector-specific laws (such as telecommunications laws for mobile carriers). In addition, broadly applicable privacy and consumer protection laws will generally apply to all commercial entities, such as the California Consumer Privacy Act, or the Federal Trade Commission Act (FTC Act).

Finally, in addition to legal and policy controls, location datasets can be technically modified to further mitigate risks to individuals and groups. Some of those practical mitigation steps might include:
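As one hypothetical illustration of such a step (not drawn from the infographic itself), a data holder might coarsen coordinates before storing or sharing them, so that records describe a neighborhood rather than a building; rounding to two decimal places corresponds to roughly one kilometer of latitude.

import Foundation

// Illustrative only: round coordinates to a fixed number of decimal places,
// trading precision for lower re-identification risk.
func coarsen(latitude: Double, longitude: Double, decimals: Int = 2) -> (lat: Double, lon: Double) {
    let factor = pow(10.0, Double(decimals))
    return (lat: (latitude * factor).rounded() / factor,
            lon: (longitude * factor).rounded() / factor)
}

let coarse = coarsen(latitude: 38.907192, longitude: -77.036873)
// coarse == (lat: 38.91, lon: -77.04)

Precision reduction alone is rarely sufficient; in practice it is usually combined with retention limits, aggregation, and the other de-identifying technologies noted above.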

Future of Privacy Forum Partners with Dublin City University

Today, the Future of Privacy Forum (FPF) and Dublin City University (DCU) have announced a new partnership that will see them host joint conferences and workshops, collaborate on research projects, develop resources for policymakers, and pursue applications for research opportunities together over the next three years.

“Partnering with DCU will allow us to collaborate with some of the world’s leading experts on AI and other innovative technologies to ensure data protection, privacy and ethics remain a priority for research and new products,” said Jules Polonetsky, CEO of the Future of Privacy Forum. “FPF is expanding its presence in Ireland because individuals in the US and EU share common values about both privacy and data protection challenges as well as the opportunities data enables to make our lives better.”

DCU is home to some of the leading AI-focused research and scholarship programs in Ireland. DCU is a lead university for the Science Foundation Ireland ADAPT program and hosts the consortium leadership for the INSIGHT research centre, two of the largest government-funded AI and tech-focused development programs.

“Our partnership with the Future of Privacy Forum will be a valuable asset as DCU helps craft the strategy to keep Ireland a global leader in developing artificial intelligence and other technologies,” said Professor Lisa Looney, Executive Dean of the Faculty of Engineering and Computing at DCU. “Leaders in government and in industry respect FPF for its expertise on the best approaches to balance individual privacy and the benefits of new technology applications.”

FPF will be partnering with DCU on a proposal for an SFI Industry-Academia project on data governance with tech platforms and SFI research centres across Ireland. FPF and the Faculty of Engineering and Computing also plan to engage in joint research via EU funding, student projects, and national funding such as the SFI ADAPT and INSIGHT research centres. The Faculty launched a campus-wide Ethics and Privacy week event this year and will work with FPF to make it an annual event and extend its reach to undergraduates across all disciplines as well as the DCU research community.

FPF has built strong partnerships across Europe through its convening and trainings for policymakers and regulators. To learn more about FPF’s EU work, head to fpf.org/eu.

CONTACT

Nat Wood

[email protected]

(410) 507-7898

About FPF

The Future of Privacy Forum (FPF) is a non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting fpf.org. 

About Dublin City University

DCU is Ireland’s fastest growing university. It has seen its student population increase by 50% in the past five years, to over 17,500 students. It has forged a reputation as Ireland’s university of enterprise, through its strong, active links with academic, research, and industry partners both at home and overseas.

DCU has five faculties incorporating 23 schools spread across three academic campuses, located in Glasnevin and Drumcondra, on the Northside of Dublin City. 

DCU develops highly sought-after, well-rounded graduates who are ready for the workforce and eager to apply their knowledge and skills in a broad range of settings. For more information, please visit www.dcu.ie.

FPF CEO: Will I Install an Exposure Notification App? Thoughts on the Apple-Google API

As a privacy expert, if my local health department develops a mobile app for people with a COVID diagnosis to alert anyone they were near, will I use it?

Yes, I will. And I will urge friends, neighbors, and colleagues to download such an app. I have an immunocompromised family member in my household. I am also lucky to live one block from my senior-citizen in-laws. If a health department app can inform me that I am possibly at risk, I can take measures to keep them safe from me. I want that app to be built with privacy protections in place, collecting only the data needed and deleting it as soon as possible. Today, Apple and Google launched new capabilities for health department apps, with strict technical privacy restrictions, designed to give these apps the ability to scan for nearby devices while deleting data within 30 days.

In my home state of Maryland, Governor Hogan is seeking to quadruple the current staffing to 1,000 state employees and outside contractors supporting manual contact tracing, but hiring and training will take time. Contact tracing relies on interviewing people about who they may have come into contact with recently and then painstakingly finding the contact information needed to reach every one of those potentially exposed individuals. It also relies on people accurately remembering all of their interactions. Can you remember the people you stood next to in the long line at the grocery store last week?

Should my health department offer an app to supplement this process? I hope they will look closely at the way apps have been used by health departments for exposure notification around the world and decide whether it would be a useful supplement to the human contact tracing effort they are setting up.

In an ideal world, we would have a national response that deployed hundreds of thousands of human contact tracers, so that use of an app would be a very minor supplemental option. Exposure notification apps would be tested for efficacy in carefully controlled studies. The CDC would be working with the WHO to advise based on the results of studies of the app efforts in Singapore, Israel, Hong Kong, South Korea, and elsewhere. We might learn whether they are helpful and what data they need. Do health department apps need precise location, despite the risks of revealing the private activities of individuals? Can the apps rely solely on Bluetooth information about proximity to nearby phones and still be effective? Are the apps effective if they are voluntary and work in a decentralized manner? What is the risk of abuse of data collected in countries without strong data protection legislation or with dangerous human rights records? But we do not live in a perfect world, and timely preventive measures can save lives today.

I realize that the data may be imprecise, untested, imperfect. I will look to my reasonably competent health department for guidance. I realize I am privileged in this regard: if I get an alert, I can work from home and be paid, and I can err on the side of caution. Many cannot. I realize that not everyone has a smartphone, so this is not a service from which everyone can benefit, but smartphones are among the most widely adopted technologies in the world. I hope we can find ways to ensure everyone can have access and that we can address economic and racial disparities.

I vote, donate, and actively campaign for candidates who I hope will work to make society more just. I have served in government at the city, state, and federal levels, and I have been both elected and appointed to office. But in an imperfect world and during an emergency, we all need to make the most ethical decision with the facts at hand. Relying on such apps is, in my view, a potentially helpful supplemental safety measure that fills a gap created by the current challenges.

Let’s turn to what Apple and Google should be doing to support local health departments. First, let’s note that Apple and Google did not invent the idea of using a phone for exposure notification or contact tracing during this pandemic. Health departments in countries that moved quickly to respond to the outbreak commissioned apps that used mobile phone location services, and sometimes Bluetooth capabilities, and promoted them to their local populations. But it turns out that, due to privacy settings and power limitations, mobile phones are not the most effective tool for the highly precise information collection needed for tracing. These privacy protections have been baked deep into the devices’ operating systems through years of work to prevent misuse by human-rights-abusing governments, stalkers, criminals, advertisers, and marketers.

Another problem the Google-Apple API will solve is that existing exposure notification apps are often not interoperable with each other. If a person downloads an app from one public health authority and then comes into contact with a user of an app from another jurisdiction, the apps often will not recognize one another. All apps using the Apple-Google API, however, will recognize one another. This kind of interoperability and scale is essential for effective notifications, and in turn for helping society cautiously begin to reopen.

These are the limitations public health authorities face in developing apps. The apps launched to date have usually relied on asking users to opt in to sharing their location, but precise location data can reveal intimate information: where you’re going, where you’ve been, your character, interests, habits, religion, and political inclinations.

So health departments began looking to Google and Apple to give them better access to the limited Bluetooth APIs currently available. Remarkably, for two competitors who rarely cooperate, Apple and Google partnered to provide a new API that allows background sending and receiving of rotating Bluetooth identifiers. This gives apps access to information they could not get before, but with limits on how it can be accessed or used. Only health departments will be approved to use the new API, to limit the sending of fake signals. Health departments are not sent information about individual users; the app and the device handle the matching locally.
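As a rough sketch of what this looks like from an approved health department app’s side (Swift, using Apple’s ExposureNotification framework; the surrounding app logic and error handling are simplified assumptions), the app activates an ENManager and asks the operating system to handle the broadcasting and scanning of rotating identifiers on its behalf:

import ExposureNotification

// Sketch: an approved public health app turns on exposure notification.
// The operating system, not the app, broadcasts and scans for the rotating Bluetooth identifiers.
let enManager = ENManager()
enManager.activate { error in
    if let error = error {
        print("activation failed: \(error)")
        return
    }
    enManager.setExposureNotificationEnabled(true) { error in
        if let error = error {
            print("user declined or exposure notification is unavailable: \(error)")
        } else {
            print("exposure notification enabled")
        }
    }
}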

Apple and Google did not create an app. They created an API, a technical method for apps to get information from the device. Public health authorities will create the apps that use this information and will be responsible for how it is communicated, how users receive alerts, and what those alerts say. Public health authorities will also be able to set the Bluetooth signal strength and duration of proximity required to trigger an alert.
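In Apple’s framework, for example, those options are expressed through an ENExposureConfiguration object that the health authority’s app supplies when asking the device to check downloaded diagnosis keys. The numeric values below are placeholders for illustration, not recommended settings:

import ExposureNotification

// Sketch: a health authority's app describes what should count as a risky exposure.
let configuration = ENExposureConfiguration()
configuration.minimumRiskScore = 1
configuration.attenuationLevelValues = [1, 2, 3, 4, 5, 6, 7, 8]          // Bluetooth attenuation buckets, a rough proxy for distance
configuration.durationLevelValues = [1, 2, 3, 4, 5, 6, 7, 8]             // how long the devices were near each other
configuration.daysSinceLastExposureLevelValues = [1, 2, 3, 4, 5, 6, 7, 8]
configuration.transmissionRiskLevelValues = [1, 2, 3, 4, 5, 6, 7, 8]

// With an activated ENManager (see the sketch above) and diagnosis key files published by
// the health authority, the device performs the matching locally and returns only a summary:
// enManager.detectExposures(configuration: configuration, diagnosisKeyURLs: keyURLs) { summary, error in
//     // summary?.matchedKeyCount indicates whether the app should surface an alert to the user
// }

Consistent with the design described above, the matching happens on the device, and the app learns only a summary of whether an exposure threshold was met.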

Now here is where it gets complicated. Some health departments want to use the new API and also collect location data, creating a risk that users can be identified. Some health departments want to create centralized databases to help them track and analyze the data collected. These health departments want Google and Apple to change their APIs and terms of use to allow the collection of more personal data from users. But any change to the API or terms will affect users in every country in the world, creating risks that governments could misuse the API for law enforcement or for human rights abuses. Some privacy advocates think that even the current Bluetooth-only apps can create a security risk. Some think that local democratic governments, not tech companies, should set the privacy rules. Most average users will have a difficult time understanding the important differences between location and proximity. There is some truth to every one of these points, and no option is without downsides.

But if, like me, you want to protect those around you by being able to receive and share these alerts with minimal risk to privacy, health department apps that use the new API should provide an additional tool in the effort to reopen society as we fight the pandemic.

For more privacy and data protection resources related to COVID-19, click here.

FPF Honors UC-Irvine/Lumos Labs Partnership with First-Ever Award for Research Data Stewardship

Click here to view the Call for Nominations for the 2021 FPF Award for Research Data Stewardship.

Click here to watch a recording of the 2020 FPF Award for Research Data Stewardship virtual awards event.

University of California Irvine (UCI) Professor of Cognitive Science Mark Steyvers and Lumos Labs – the parent company behind Lumosity, a popular online brain-training game website – are the winners of the first-ever Award for Research Data Stewardship from the Future of Privacy Forum (FPF). The award-winning collaboration between Professor Steyvers and Lumos Labs employed privacy techniques to transform data on user play into innovative cognitive science research. The annual FPF Award for Research Data Stewardship is supported by the Alfred P. Sloan Foundation, a not-for-profit grantmaking institution that supports high-quality, impartial scientific research and institutions. 

Lumosity supplied researchers with de-identified data on users’ response time and accuracy from one Lumosity game; the researchers were interested in identifying how people flexibly and efficiently adapt their behavior in response to changing contexts, otherwise known as task switching. To ensure that the data sharing project minimized potential privacy risks, both parties took a number of steps, including:

“Independent research on consumer data collected by private companies holds the keys to addressing many of the challenges facing our society today, but it must be done in a way that protects individual privacy,” said Jules Polonetsky, CEO of the Future of Privacy Forum. “The COVID-19 pandemic has highlighted the urgency of promoting privacy-protective means of conducting research. That’s exactly what we’re doing by honoring Professor Steyvers and Lumos Labs as the winners of the Award for Research Data Stewardship.”

Nominees for the Award for Research Data Stewardship were judged on their adherence to privacy protection in the data sharing process, the quality of the data handling process, and the company’s commitment to supporting academic research. Nominations were reviewed by a jury of academic and industry thought leaders, including representatives from FPF, leading foundations, academia, and industry. Establishing data protections for corporate-academic data sharing is increasingly important as governments, healthcare institutions, and researchers aim to obtain and deploy consumer data to track the spread of the coronavirus, deliver emergency supplies, target travel restrictions and quarantines, and develop vaccines and cures.

The partnership between Lumos Labs and Professor Steyvers was created through the Human Cognition Project (HCP), an online platform built to facilitate large-scale, collaborative research studies led by independent academic and clinical researchers. Over the last decade, the HCP has supported over 100 collaborators from universities and organizations, resulting in more than 40 peer-reviewed publications.

“The Human Cognition Project as a whole, and the collaboration with Professor Steyvers in particular, demonstrates our commitment to sharing our data with academic researchers in a manner that respects individual privacy,” said Bob Schafer, General Manager of Lumos Labs. “Protecting the individual privacy of our users while using data and research to make the world a better place is at the heart of what we do at Lumos Labs.”

“The research collaboration with Lumos Labs enabled me to access the right data, without fear of compromising individual privacy,” said Mark Steyvers, Professor at University of California Irvine. “Through the Human Cognition Project, I was able to access large-scale data sets that enabled more extensive and precise investigations of human learning than is typically achievable conducting tests in a laboratory.” 

The partnership resulted in the publication of research in a leading journal that advances the research field’s understanding of an important cognitive function – task switching – and the impact of practice. The partnership has also provided resources and tools to the larger research community to promote transparency and reproducibility of results and has democratized this type of “big data” approach to the cognitive sciences. 

In addition to the award winners, FPF announced several nominated projects that earned honorable mentions, including: 

Learn more about the project, including best practices for future data sharing collaborations, on the FPF website.

CONTACT

Nat Wood

[email protected]

(410) 507-7898