Remarks on Diversity and Inclusion by Michael McCullough

Last Thursday, June 18, 2020, Macy’s Chief Privacy Officer and FPF Advisory Board member Michael McCullough spoke about diversity and inclusion at WireWheel’s Spokes 2020 conference. 

The Question:

I’ve spoken to each of you about your views on diversity and equality, and about how that’s reflected in our privacy and data protection community. This has been an especially important time for that, but our community needs to help drive progress on these important issues. How do you think we can do that?

Response:

This is such a soul-crushing topic that we HAVE to tackle and keep tackling and keep talking about. People of conscience and goodwill MUST constantly strive for fairness and egalitarianism in all aspects of our society.

This is a painful time for me and so many others… and it’s hard to talk about – certainly in this setting. But I’m going to do it because it matters. And Justin, I appreciate your candor, openness and clear commitment to having these conversations.

I get asked a lot – personally and professionally: “What are you; I mean what’s your background?” I respond – generally – cheerfully enough. I am half black and half white. But none of that is even close to what I AM. That is the most uninteresting, insipid question ABOUT ME. I know most mean no harm – it may be of genuine interest. But it’s really more about the asker, a crutch to know what box to put me in – even if subconsciously. Even if that is not the case, it FEELS like that’s the case.

So…I am black and white. While more and more people today are a mix of some “races,” I grew up with the knowledge that my mere existence was unlawful in some quarters due to anti-miscegenation laws…and you may be surprised to know this repugnant language remained in state constitutions after Loving v Virginia — till 1987 in Mississippi; 1998 in South Carolina and 2000 in Alabama.

Being both black and white I KNOW how hard this topic is. For some white people, there’s guilt and facing up to one’s own prejudices and privileges that are uncomfortable, or a semblance of shame for feeling they’ve not done enough to be part of the solution, and dozens of other discomfiting sensations.

For many black folks, there’s anguish, historical… living history and immediate pain. There’s rage. Productive, real conversations are hard when there is existential pain. And sometimes, at least for me, those conversations feel like bowling for soup and a poultice for someone else’s wound that’s doing just enough for an entry in the CSR (Corporate Sustainability Report). I myself, and so many folks I have talked to, are self-policing on how to even talk about racism and white supremacy (especially in a professional/corporate environment): there’s hyper awareness ‘to be measured’ so as not to appear strident and feed into a stereotype (or be cast as too emotive for business). There’s personal risk in this talk.

So, this is my background to answer the question, “how do we think we can drive diversity and inclusion?”

We all know this is an institutional, systemic problem. Structural discrimination doesn’t require conscious racists or sexists. We all know there is no silver bullet. All we can do is start tearing down those institutions, this increasingly colonizing white supremacy sexist archipelago…one brick at a time.

Business efforts have to move from compensatory to reparative. Strategies for Diversity & Inclusion – commitments to reevaluate hiring practices, ensuring diversity in supply chain and vendors — are compensatory. As the adage goes “culture eats strategy for breakfast.” We need to planfully invest in building cultures of fairness and equity.

We are all getting the emails and messaging from companies stating their commitment to optimize diversity and inclusion and their strategies to build an inclusive environment. 130 plus years late – Aunt Jemima is being retired, as are Uncle Ben and Mrs. Butterworth. Seriously! It’s 2020. These are major companies. I know they have Equal Opportunity policies and pro-diversity programs…. Something is wrong. Something is very wrong.

Then you have Ben and Jerry’s statement. I am shocked and disappointed that their statement entitled “We Must Dismantle White Supremacy: Silence Is NOT An Option” was so shocking. THAT stake in the ground is REPARATIVE. That is a culture bomb. We can no longer just be pro-diversity; pro-fairness; pro-equity; pro-black; pro-women; pro-Jewish or Sikh — we have to become culturally ANTI-RACIST; ANTI-SEXIST; ANTI-DISCRIMINATORY. That semantic difference matters — here’s why:

Relevant to our Worlds and what we control… We are very good at measuring and managing risk. There’s always a balance and certain tolerances – a calculus. But the “purpose” of risk management ultimately is economic and — in quotations “fiduciary.” Within that calculus we are less good at recognizing and accounting for “harms.” Bias harms are hard to tease out… and sometimes it is hard to get people to understand why bias harms are bad and should be cured; not managed… this is increasingly true when the “harms” potentially affect “only” small groups or are individualized, which is relevant as we increasingly pursue meaningful personalization. Risk management, we are good at the big stuff; less good at the small stuff. We need to be hyper aware of how we group and categorize “others.” Mere categorization can lead to harm…even if unintentional…(“WHAT ARE YOU, I mean what’s your background”) and especially when profitable, because “racism is productive”. In the obvious and clearly egregious cases, Aunt Jemima and Uncle Ben’s are profitable. So, what to do?

We have all had the rousing IT exec that makes a “zero defect environment” a MISSION. Is it achievable? No…. But it gives purpose beyond the traffic-lighting we can get caught up in day-to-day. Likewise, we can set a standard of “zero discrimination/zero harm” (zero defect) in our data practices. It gives mission and purpose to what otherwise is simply effective management. This mission can be supported operationally by assurance activities like adding, maintaining and appropriately updating equity analysis for code audits.

Many of the other things we can do have been talked about for years.

Demanding not just diversity on boards, but people with a demonstrated commitment to fairness and equity as a mission. And I mean women, I mean people of color, I mean trans and non-binary people. The co-founder of Reddit stepped down from its board and asked to be replaced by a person of color, after recognizing the white privilege he had come to see through his marriage to Serena Williams and having biracial children.

Commit to a diverse pipeline and curate talent (so don’t just do executive recruiting at schools; invest in and co-design the programs that teach and build zero discrimination coding and design), measure it and seek feedback internally and externally. Defang an inarguably unfair and institutionally white supremacist carceral system by promoting programs focused on giving the formerly incarcerated a hand up. Go beyond including diverse images in your spaces; seek out and ensure diverse image makers and storytellers are contributing to spaces. Give time, dollars and people to groups that are challenging status quo approaches to tech and be partners in experimentation. Challenge filter bubbles in your own organizations — I’ve seen the breakrooms and lunch halls and all hands meetings. As leaders, seek diversity in your mentorship circle; reverse mentor with diverse people.

We, specifically, have an opportunity as a community NOT to further export existing bias, structural racism and sexism into Code, and to begin unwinding and righting that ship, today. This is a singular moment to do that.

I believe that people, especially in this community, are overwhelmingly good and fair – we choose careers that protect people. But complacency in the face of complexity and difficulty, no matter how subdermal, is not an option – unpacking the non-obvious and finding solutions for the complex and difficult is what we do! This (moment; this need) will not pass – we have to reframe and reshape our corporate cultures; we have to be more than allies, but partners in liberation, fairness and equity for all.

Now, I believe deeply in free speech. When I became a Marine, I took an oath to protect and defend the Constitution. I would fight and die to protect 1st Amendment speech I find abhorrent. But racists and sexists should have no harbor in business and we have to do more than just be — PRO. We have to dig in, do the hard work and excise the business pro-ductivity of sexism and racism.

Finally, I just read a book with the Future of Privacy Forum called “Race After Technology: Abolitionist Tools for the New Jim Code” by Princeton sociologist Ruha Benjamin. She goes deep into the structures and encoding of white supremacy, the way that it infects CODE, and how “racism is productive.” It’s revelatory and worth the read to at least spark the imagination for “what can we do” (and “who do we want to BE”).

Postscript:

If you found my comments compelling in any way, I urge you to read Ruha’s book and the work of the many scholars illuminating the historical contexts, costs and caustic impacts of white supremacy and racism on our society today. I urge you to really listen to and co-imagine reshaping company cultures with your colleagues who bring life experience with racism and bias to the workplace. I urge all of us to reflect on our own roles and opportunities to harness this moment to drive critical change.

Michael ‘Mac’ McCullough is the Chief Privacy Officer and GRC Leader at Macy’s, a former Marine, and a member of the FPF Advisory Board. These remarks were delivered in his personal capacity and are shared here to mark Juneteenth. The remarks are lightly edited.

Juneteenth

FPF is closed for Juneteenth as our staff reflects on both the history and current state of racism in America. Our social media accounts will be silent, other than to elevate voices that can help us learn and take action on issues such as equity and inclusion.

In that spirit, we would like to call attention to the work of Professor Ruha Benjamin and her book Race After Technology: Abolitionist Tools for the New Jim Code. The FPF Privacy Book Club was honored to learn from Professor Benjamin this week and we invite you to watch the video and order her book. We found it to be a thought-provoking commentary on how emerging technologies can reinforce white supremacy and deepen social inequity. We would also like to call attention to 15+ Books by Black Scholars the Tech Industry Needs to Read Now, posted by the Center for Critical Internet Inquiry at UCLA.

Supreme Court Rules that LGBTQ Employees Deserve Workplace Protections–More Progress is Needed to Combat Unfairness and Disparity

Authors: Katelyn Ringrose (Christopher Wolf Diversity Law Fellow) and Dr. Sara Jordan (Policy Counsel, Artificial Intelligence and Ethics)

Today’s Supreme Court ruling in Bostock v. Clayton County—clarifying that Title VII of the Civil Rights Act bans employment discrimination on the basis of sexual orientation and gender identity—is a major victory in the fight for LGBTQ civil rights. Title VII established the Equal Employment Opportunity Commission (EEOC), and bans discrimination on the basis of sex, race, color, national origin and religion by employers, schools, and trade unions involved in interstate commerce or those doing business with the federal government. Today’s 6-3 ruling aligns with Obama-era protections, including a 2014 executive order extending Title VII protections to LGBTQ individuals working for federal contractors.

In this post, we examine the impact of today’s decision, as well as (1) voluntary anti-discrimination efforts adopted by companies for activities not subject to federal protections; (2) helpful resources on the nexus of privacy, LGBTQ protections, and big data; and (3) the work FPF has done to identify and mitigate potential harms posed by automated decision-making. 

In Bostock, the Supreme Court determined that discrimination on the basis of sexual orientation or transgender status are forms of sex discrimination, holding: “Today, we must decide whether an employer can fire someone simply for being homosexual or transgender. The answer is clear. An employer who fires an individual for being homosexual or transgender fires that person for traits or actions it would not have questioned in members of a different sex. Sex plays a necessary and undisguisable role in the decision, exactly what Title VII forbids.” 

Bostock resolved the issue through analysis of three consolidated cases: Bostock v. Clayton County, Altitude Express, Inc. v. Zarda, and R.G. & G.R. Harris Funeral Homes, Inc. v. EEOC.

“Today is a great day for the LGBTQ community and LGBTQ workers across the nation. The United States Supreme Court decision could not have come at a better time given the current COVID-19 crisis and the protests taking place across the country. However, there still remains much work to be done, especially around the areas of data and surveillance tools. The well-documented potential for abuse and misuse of these tools by unregulated corporations as well as government and law enforcement agencies should give serious pause to anyone who values their privacy–especially members of communities like ours that have been historically marginalized and discriminated against,” says Carlos Gutierrez, Deputy Director & General Counsel of LGBT Tech. “Today’s decision will protect over 8 million LGBT workers from work discrimination based on their sexual orientation or gender identity. This is especially heartening given that 47% or 386,000 of LGBTQ health care workers, people on the frontlines of the COVID-19 battle, live in states that had no legal job discrimination protections.” 

We celebrate today’s win. However, it is now more critical than ever to address data-driven unfairness that remains legally permissible and harmful to the LGBTQ community. 

Bostock should also influence a range of anti-discrimination efforts. In recent years, many organizations have engaged in various efforts to combat discrimination even when their activities are not directly regulated by the Civil Rights Act. When implementing such anti-discrimination programs, organizations often look to the Act to identify protected classes and activities. Bostock provides clarity — organizations should include sexual orientation and gender identity in the list of protected classes even if their activities wouldn’t otherwise be regulated under Title VII.

Anti-Discrimination Efforts

Title VII of the Civil Rights Act has historically barred discrimination on the basis of sex, race, color, national origin and religion; the Civil Rights Act, including Title VII, is the starting point for anti-discrimination compliance programs. Even companies that do not have direct obligations under the Act (including ad platforms) have used the Act to guide their anti-discrimination efforts (see the Network Advertising Initiative’s Code of Conduct). According to the Human Rights Campaign, the share of Fortune 100 companies that have publicly pledged to non-discrimination employment policies increased from 11% (gender identity) and 96% (sexual orientation) in 2003 to 97% and 98%, respectively, by 2018.

We caution that simply not collecting, or ignoring, sensitive information will not always ensure that discrimination is avoided. Even without explicit data, proxy information can reveal sensitive attributes. Furthermore, in order to assess whether protected classes are treated unfairly, it will sometimes be important to collect information that can identify discrimination. While sensitive data collection has both benefits and risks, a lack of data available to researchers can mean that policymakers do not have the information necessary to understand disparities in enough depth to create responsive policy solutions.

Helpful Resources

Unfairness by Algorithm

While discriminatory decisions made by a human are clearly regulated, the full range of potentially discriminatory decisions made by a computer is not yet well understood. Yet algorithmic harms may be similarly pernicious, while being more difficult to identify and less amenable to redress using available legal remedies.

In a 2017 Future of Privacy Forum report, Unfairness by Algorithm: Distilling the Harms of Automated Decision Making, we identified four types of harms—loss of opportunity, economic loss, social detriment, and loss of liberty—to depict the various spheres of life where automated decision-making can cause injury. The report recognizes that discriminatory decisions and resulting unfairness as determined by algorithms can lead to distinct collective and societal harms. For example, the use of proxies, such as “gayborhood” ZIP codes in algorithms or resume clues regarding LGBTQ community activism, can lead to employment discrimination and to the same differential access to job opportunities that explicit discrimination produces.
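To make the proxy problem concrete, here is a minimal, hypothetical sketch of one common way practitioners surface such disparities: comparing selection rates across groups and applying the “four-fifths” rule of thumb. This is our illustration, not an analysis from the report; the groups, numbers, and the 0.8 threshold are illustrative assumptions only.

```python
# Hypothetical illustration: a minimal disparate-impact check that
# compares selection rates across groups using the "four-fifths" rule.

def selection_rate(outcomes):
    """Fraction of candidates selected (outcomes are 1 = advanced, 0 = rejected)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disparate_impact_ratio(protected, reference):
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(protected) / selection_rate(reference)

# Example: outcomes of an automated resume screen, grouped by a proxy
# attribute such as ZIP code. All values here are made up.
group_a = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]   # 30% advanced
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% advanced

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")   # 0.43
if ratio < 0.8:
    print("Potential adverse impact; examine the features driving the gap.")
```

A check like this does not prove discrimination, but it can flag where a proxy variable deserves closer scrutiny.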

As organizations commit to LGBTQ protections, adherence to data protection and fairness principles is one way to battle systemic discrimination. These principles include ensuring fairness in automated decisions, enhancing individual control of personal information, and protecting people from inaccurate and biased data.

Conclusion

Today’s decision regarding workplace protections could not be more welcome, particularly now as data from the Human Rights Campaign shows that 17% of LGBTQ people and 22% of LGBTQ people of color have reported becoming unemployed as a result of COVID-19. However, the fight for inclusivity and equality does not stop with law and legislation. Further work is necessary to ensure that data-driven programs uncover and redress discrimination, rather than perpetuate it.

Associated Press: Schools debate whether to detail positive tests for athletes

In a recent article published by the Associated Press in The Washington Post and The New York Times, the Future of Privacy Forum warns of the privacy risks of sharing information about positive COVID-19 tests among students, particularly student athletes who have already returned to campus to prepare for the upcoming sports season. Read an excerpt below and see the full article here.

Athletic programs sometimes avoid making formal injury announcements, citing the Health Insurance Portability and Accountability Act (HIPAA) or the Family Educational Rights and Privacy Act (FERPA). Both are designed to protect the privacy of an individual’s health records. The U.S. Education Department issued guidelines in March that said a school shouldn’t disclose personal identifiable information from student education records to the media even if it determines a health or safety emergency exists.

But is merely revealing a number going to enable anyone to identify which athletes tested positive? That’s up for debate.

Amelia Vance is the director of youth and education privacy at the Future of Privacy Forum, a think tank dedicated to data privacy issues. Vance believes releasing the number of positive tests effectively informs the public without sacrificing privacy.

Vance said disclosing the number of positive tests for a certain team would help notify members of the general public who may have come into contact with the athletes and could serve as a guide to those schools that haven’t welcomed students back to campus yet.

“If you’re saying six students tested positive or a student was exposed and therefore we’re having the whole team tested or things like that, that wouldn’t probably be traced back to an individual student,” Vance said. “Therefore, neither (FERPA or HIPAA) is going to apply, so any claim that privacy laws wouldn’t allow that disclosure would be disingenuous.

“The key there is to balance the public interest with the privacy of the students,” she said. “Most of the time, the information colleges and universities need to disclose don’t require the identification of a particular student to the press or general public.”

Read the article here.

TEN QUESTIONS ON AI RISK

Gauging the Liabilities of Artificial Intelligence Within Your Organization

Artificial intelligence and machine learning (AI/ML) generate significant value when used responsibly – and are the subject of growing investment for exactly these reasons. But AI/ML can also amplify organizations’ exposure to potential vulnerabilities, ranging from fairness and security issues to regulatory fines and reputational harm.

Many businesses are incorporating ever more machine-learning-based models into their operations, both on the back end and in consumer-facing contexts. Companies that use these systems but did not develop them assume the responsibility of managing, overseeing, and controlling these algorithmically based learning models, in many cases without extensive internal resources to meet the technical demands involved.

General-purpose toolkits for this challenge are not yet broadly available. To help fill that gap while more technical support is developed, we have created a checklist of questions designed to support sufficient oversight of these systems. The questions in the attached checklist – “Ten Questions on AI Risk” – are meant to serve as an initial guide to gauging these risks, both during the build phase of AI/ML endeavors and beyond.

While there is no “one size fits all” answer for how to manage and monitor AI systems, these questions will hopefully provide a guide for companies using such models, allowing them to customize the questions and frame the answers in contexts specific to their own products, services, and internal operations. We hope to build on this start and offer additional, detailed resources for such organizations in the future.
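The questions themselves are in the attached checklist. Purely as an illustration of how a team might operationalize a question-driven review, here is a minimal sketch; the data structures and the sample questions below are our hypothetical examples, not the questions from the checklist.

```python
from dataclasses import dataclass, field

@dataclass
class RiskQuestion:
    """One oversight question, tracked with an owner and an answer."""
    text: str
    owner: str
    answer: str = ""
    resolved: bool = False

@dataclass
class AIRiskReview:
    """A lightweight record of a question-driven AI/ML risk review."""
    system_name: str
    questions: list = field(default_factory=list)

    def open_items(self):
        """Questions that still lack a documented, signed-off answer."""
        return [q for q in self.questions if not q.resolved]

# Hypothetical example questions, for illustration only.
review = AIRiskReview("credit-scoring-model", [
    RiskQuestion("Who is accountable for the model's outputs?", owner="ml-lead"),
    RiskQuestion("How is the model monitored for drift and bias?", owner="mlops"),
])
print([q.text for q in review.open_items()])
```

The point of such a structure is simply that each question gets an owner and a recorded answer, so oversight is auditable rather than ad hoc.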

The attached document was prepared by bnh.ai, a boutique law firm specializing in AI/ML analytics, in collaboration with the Future of Privacy Forum.

Polonetsky: Are the Online Programs Your Child’s School Uses Protecting Student Privacy? Some Things to Look For

Op-ed by Future of Privacy Forum CEO Jules Polonetsky published in The74.

As CEO of a global data protection nonprofit, I spend my workdays focused on helping policymakers and companies navigate new technologies and digital security concerns that have emerged in the wake of the COVID-19 pandemic.

Meanwhile, my children have adopted many of these technologies and are participating in online learning via Zoom and dozens of other platforms and apps — some of which have sparked serious concerns about student privacy and data security in the classroom.

These things are not contradictory. Here’s why.

Specific laws have been put in place to protect especially sensitive types of data. Your doctor uses services that safeguard your health information, and your bank relies on technology vendors that agree to comply with financial privacy laws.

Similarly, as the use of technology in the classroom skyrocketed in the past decade, federal and state laws were established that require stringent privacy protections for students.

To comply, many general consumer companies like Google, Apple and Microsoft developed education-specific versions of their platforms that include privacy protections that limit how they will use student information. School districts set up programs to screen ed tech software, even though few of the new laws came with funding.

But many of these federal and state protections apply only to companies whose products are designed for schools, or if schools have a privacy-protective contract with vendors. As schools rushed to provide distance learning during their coronavirus shutdowns, some of the tools adopted were not developed for educational environments, leaving children’s data at risk for sale or marketing uses.

If your child’s school has rolled out new technology platforms for online learning, there are important steps you can take to determine whether the tool includes adequate safeguards to protect student privacy. First, ask whether your school has vetted the company or has a contract in place that includes specific limitations on how student information can be used. Don’t hesitate to ask your child’s teacher to explain what data may be collected about your child and how it will be used — you have a right to this information.

Second, check to see if the company has signed the Student Privacy Pledge, which asks companies that provide technology services to schools to commit to a set of 10 legally binding obligations. These include not selling students’ personal information and not collecting or using students’ personal information beyond what is needed for the given educational purposes. More than 400 education technology companies have signed the pledge in recent years, so this can be a quick resource for identifying businesses that have demonstrated a commitment to ensuring that student data are kept private and secure.

Most importantly, take time to review each program’s privacy settings with your child and have an honest discussion about behavior online. Even the strictest privacy controls can’t always prevent a student from disrupting class by making racist remarks in the chat or sharing the link or log-in credentials. I hate to load another burden on parents who are trying to work from home, but making sure your kid isn’t an online troll is partly on you.

Now more than ever, we are relying on technology to keep in touch with work, school, and friends and family. It hasn’t been — and will never be — perfect. Policymakers can help schools ensure that the technologies they use meet privacy and security standards by providing the resources for schools to employ experts in those fields.

As we all try to adjust to this new normal, we should embrace technologies that can add value to students’ educational experience, enhance our ability to work remotely and help us stay connected. But we must first make sure the appropriate safeguards are in place so privacy and security don’t fall by the wayside.

Jules Polonetsky is the CEO of the Future of Privacy Forum, a Washington, D.C.-based nonprofit that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Previously, he served as chief privacy officer at AOL and DoubleClick, as New York City consumer affairs commissioner, as a New York state legislator, as a congressional staffer and as an attorney.

A Landmark Ruling in Brazil: Paving the Way for Considering Data Protection as an Autonomous Fundamental Right

Authors: Bruno Ricardo Bioni and Renato Leite Monteiro


A historic ruling of the Brazilian Supreme Court, dated May 7, 2020, describes the right to data protection as an autonomous right stemming from the Brazilian Constitution. By a significant majority, 10 votes to 1, the Court halted the effectiveness of the Presidential Executive Order (MP[1] 954/2020) that mandated telecom companies to share the subscriber data (e.g., name, telephone number, address) of more than 200 million individuals with the Brazilian Institute of Geography and Statistics (IBGE), the country’s agency responsible for census research. More important than the decision itself was its reasoning, which paves the way for recognizing the protection of personal data as a fundamental right independent of the right to privacy, which already enjoys such recognition, in a fashion similar to the Charter of Fundamental Rights of the European Union. This article summarizes the main findings of the ruling. First, (1) it provides background on the role of the Brazilian Supreme Court and the legal effects of the ruling. It then looks into (2) the facts of the case and (3) the main findings of the Court, and concludes with (4) an analysis of what comes next for Brazilian data protection and privacy law.

  1. The role of the Supreme Court and its rulings in the Brazilian legal system

The Brazilian legal system mirrors the federative structure of the country. Each state has its own lower courts and appeal bodies. At the federal level, there are also lower courts and appeal bodies with specific scope, covering areas such as labor law, cases with international effects, or lawsuits against federal agencies. Above these sit superior courts, also with specific scope, such as particular violations of federal law.

At the top of the system sits the Brazilian Supreme Court (STF), a constitutional court of eleven Justices appointed by the President. With few exceptions, only extraordinary cases that directly implicate the federal constitution, e.g., violations of fundamental rights, reach the court, and its rulings can have binding effects on all other levels of the Brazilian legal system, depending on the type of proceeding or the effects granted by the Justices.

One particular type of proceeding, known as a Direct Action of Unconstitutionality (ADI), can be filed directly with the Supreme Court, without first being heard by lower courts, in cases in which laws or norms directly violate the constitution. Rulings in this type of proceeding have nationwide binding effects on all entities of the three branches of government and on private organizations. This was the type of proceeding filed at the STF to discuss data protection as an autonomous fundamental right. Its ruling, therefore, will have binding effects across the board.

  2. Facts of the case and proceedings

Due to social distancing measures adopted in response to the COVID-19 pandemic, the staff of the Brazilian Institute of Geography and Statistics (IBGE) is not able to visit citizens to conduct the face-to-face interviews required for the statistical research underpinning the national census, known as the National Household Sample Survey (PNAD). This is the context behind Presidential Executive Order 954/2020 (MP), which aimed to allow the IBGE to carry out its census research through telephone interviews. In other words, the declared purpose was to avoid a “statistical blackout”.

The telephone interviews were supposed to collect data on various socioeconomic characteristics, such as population, education, work, income, housing, social security, migration, fertility, health, and nutrition, among other topics that can be included in the research according to Brazil’s information needs, e.g., behavioral data in the context of the pandemic. These interviews had always been conducted in person with a sample of 70,000 households that was statistically representative of the Brazilian population. However, the MP mandated that the subscriber data of 200 million telecom clients be shared with IBGE to perform the census. At first glance, the question brought to the Court’s attention was: why is the personal data of so many citizens necessary to achieve the same purpose that used to be achieved with far less information?

The issue was raised by four political parties and the national bar association, which filed five ADIs with the STF to discuss violations of the fundamental right to privacy, expressly granted by Art. 5, X, of the Federal Constitution, and of the right to secrecy of communications data, provided by Art. 5, XII. In previous case-law, the Court had struggled to recognize stored data, such as subscriber data, as protected by Art. 5, XII. Long-standing precedents granted such protection only to data in motion, like ongoing telephone calls or data being transmitted. Acknowledging the need to update this understanding in light of new technologies and the impact that the misuse of data can have on individuals and society, another argument was presented: the need to recognize the right to the protection of personal data as an autonomous fundamental right.

When the ADIs were filed, Justice Rosa Weber, Rapporteur of the case, granted an injunction suspending the effects of the MP until it could be considered by all of the Justices, identifying probable violations of the aforementioned constitutional rights and arguing that, despite the pandemic, there was no public interest in sharing the personal data of 200 million people to carry out the desired public policy.

The trial before the eleven Justices started on May 6, with the participation of the parties’ lawyers and of amici curiae, including Data Privacy Brasil. The organisation filed an amicus brief and was represented at the oral statement by its Director Bruno Ricardo Bioni (a co-author of this article), who spoke at length about the singular position of the right to protection of personal data, its status as an autonomous fundamental right, the many flaws of the executive order and the current data protection landscape in Brazil, including the fact that the Brazilian General Data Protection Law (LGPD) is still in vacatio legis. He also reminded the Court that the national data protection authority, which will provide guidance and enforcement, is yet to be established. The English translation of the oral statement is available online.

  3. Main findings of the Court

Historically, the STF has ruled solely on the basis of the right to privacy and, most importantly, following the legal rationale of this fundamental right, by which only private/confidential data should be protected. In the case RE 601314, the Court ruled that the Brazilian Federal Revenue Office (the Brazilian IRS) could have access to financial data from banks without a court order. According to the Court, the data would remain confidential, since only the IRS’s staff would have access to it, bound by strict informational fiduciary duties. Moreover, such data did not comprise sensitive (‘intimate’) information about individuals (e.g. religion, family relationships) and, therefore, IRS requests to access data would not disproportionately interfere with the right to private life. In the case RE 1055941, the same reasoning was adopted in order to grant similar data access request powers to the Public Prosecutor’s Office.

The new precedent marks a remarkable shift in how the Court analyzes privacy and data protection because it changes the focus from data that is secret to data that is attributed to persons and might impact their individual and collective lives, regardless of whether it is kept secret. There is no more irrelevant data. Justice Carmen Lucia argued that the world we used to live in, where personal data was freely available in telephone catalogs without substantial risk, does not exist anymore. In this sense, the Brazilian Federal Constitution protects not only confidential data, but any type of data that can be deemed an attribute of the human personality. The best example is habeas data, a procedural constitutional right by which any person has the right to know what information organizations hold about them, as argued by Justice Luiz Fux, recalling a precedent of the Supreme Court (Extraordinary Appeal 673.707). The habeas data constitutional right, originally conferred only against public organizations, is a remnant of dictatorial times in Brazil and throughout Latin America, when information about citizens was kept secret by the government and used to suppress the population. This provision can now be used to retrieve personal data held by private entities, as long as the databases at issue are of public interest, such as consumer protection databases managed by data brokers.

If the Brazilian Constitution’s core value is the protection of human dignity, the protection it affords should go beyond the right to privacy in order to address other harmful challenges to an individual’s existence, not only harms to personality rights. Today, humanity can be hacked not only through access to data regarding our intimacy, or aspects of the human personality that must be locked away under seven keys. Recalling the work of philosopher Yuval Harari, Justice Gilmar Mendes argued that, due to technological progress, any type of data use that covers an extension of our individuality can pose a threat to human rights and fundamental freedoms. For this reason, Justice Fux argued that, just like the Charter of Fundamental Rights of the EU, the Brazilian Constitution should recognize the protection of personal data as an autonomous fundamental right, distinct from the right to privacy.

The Cambridge Analytica scandal was recalled by Justice Luiz Fux to contextualize the collective dimension of data protection rights. By describing the facts surrounding that case, the Justice highlighted how the misuse of personal data can have an impact that surpasses the individual and can affect the very foundations of democracies and influence electoral outcomes. “We know today that the dissemination of this data is very dangerous”, affirmed Justice Fux, recalling his term as President of the Superior Electoral Court, when he analyzed a case concerning the lack of transparency and knowledge of how personal data is collected and used for political purposes, which can lead to unintended consequences that violate individual and collective rights.

If the mere processing of personal data can pose risks to the rights of individuals, it should be backed by appropriate safeguards in order to manage potentially harmful effects. Thus, the processing of personal data should be subject to the same kind of protection conferred by the due process clause: a type of protection that takes into consideration that there are risks to public liberties associated with the mere processing of data linked to a person, as argued by Justice Gilmar Mendes, quoting Julie Cohen and her work on informational due process.

“The use of personal data is inevitably an interference in the personal sphere of someone”, highlighted Justice Luis Roberto Barroso. As a consequence, such use should be proportionate, verified by asking whether:

  a) the purpose of the processing is clearly specified and legitimate;
  b) the amount of data collected is limited to what is strictly necessary in relation to the purposes for which it is processed;
  c) information security measures are adopted to avoid unauthorized third-party access.

This proportionality test, set out by Justice Luis Roberto Barroso, is clearly modeled on the traditional principles of personal data protection. For the first time, a Justice of the Supreme Court has issued a ruling with such strong wording supporting fair information practice principles as components of an autonomous constitutional right to data protection.

In addition, another landmark case was initiated by the STF two weeks later, with two Justices having already published their opinions. The main question in this second case is whether Internet platforms may implement encryption to a level that could limit, or even prevent, access by law enforcement authorities to stored or in-transit data necessary to investigate crimes. The proceeding, ADPF 403, a Claim of Non-Compliance with a Fundamental Precept (ADPF), which has the same effects as an ADI, again concerned violations of the fundamental rights to privacy and secrecy of communications data. “Digital Rights are Fundamental Rights”: with this strong affirmation, Justice Edson Fachin, the rapporteur, cast his vote ruling out any interpretation of the constitution that would allow a court order to compel exceptional access to end-to-end encrypted message content or that, by any other means, would weaken the cryptographic protection of internet applications. Justice Rosa Weber highlighted in her opinion that “the past 3 decades have been an arms race of protection technologies and privacy violations. The law cannot be ignored and must preserve the balance between privacy and the proper functioning of the State”. She also stated that “cryptography, as a technological resource, has taken on special importance in the implementation of human rights”.

The case is still ongoing, pending the votes of the other nine Justices. Nonetheless, the two opinions already published are a breakthrough and show a marked change in the perception and understanding of Brazil’s highest court towards privacy and data protection rights.

  4. A look to the future: the Brazilian General Data Protection Law and the amendment to the Brazilian Constitution

Despite this historic ruling, Brazil still lacks the institutional infrastructure to supervise and enforce data protection rights. The National Data Protection Authority was created by the Brazilian General Data Protection Law (“LGPD”), but is yet to be established. The LGPD was approved in 2018 with an initial adaptation period of 18 months, soon extended by a further six months, setting the effective date at August 2020. In parallel, a proposal to amend the Federal Constitution aims to include the protection of personal data in the list of fundamental rights. The proposal was unanimously approved by the Senate and by a special parliamentary commission of the House of Representatives. It now needs to be approved by two-thirds of that house.

Now, due to the COVID-19 pandemic, a new bill and another executive order aim to postpone the entry into force of the LGPD to 2021. The bill has already been voted on by both the Senate and the House of Representatives and now awaits Presidential confirmation. If ratified as is, the new law would keep the effective date at August 2020 but amend the LGPD to allow penalties and enforcement actions only from August 2021. In parallel, a presidential executive order has already amended the LGPD to change the effective date to May 2021. That order, however, must be approved by Congress by July of this year, which is unlikely to happen due to disputes between the two branches. As a result, we may not know until July when the law will take effect, one month before its original and still possible effective date. On top of that, the National Data Protection Authority (ANPD), created in December 2018, is yet to be established. We may therefore end up in a twilight zone, with no certainty about what will take place.

What is remarkable is that, even before the bill to amend the constitution is adopted, which may not happen in the near future due to political unrest, this ruling of the Brazilian Supreme Court already paves the way to recognizing the right to data protection in practice.

 

About the authors:

Bruno Ricardo Bioni is a PhD candidate at University of São Paulo School of Law. He was a study visitor at Council of Europe/CoE and at the European Data Protection Board/EDPB. Founder of Data Privacy Brasil; Contact: [email protected].

Renato Leite Monteiro is a PhD candidate at the University of São Paulo School of Law. He was a study visitor at Council of Europe and actively participated in the discussions that led to the Brazilian General Data Protection Law. Founder of Data Privacy Brazil; Contact: [email protected]

Data Privacy Brasil is a non-governmental organization with two operational branches: the Data Privacy Brasil School, which provides training services and privacy courses, and the Research Association Data Privacy Brasil, which focuses on researching the interconnection between the protection of personal data, technology and fundamental rights. Data Privacy Brasil aims to improve privacy and data protection capacity-building for organizations active in Brazil.


[1] MP- Brazilian abbreviation for Provisional Measure which is a legal act in Brazil through which the President can enact laws for 60 days without approval by the National Congress.

Endgame Issues: New Brookings Report on Paths to Federal Privacy Legislation

Authors: Stacey Gray, Senior Counsel (US Legislation and Policymaker Education), Polly Sanderson, Policy Counsel

 

This afternoon, The Brookings Institution released a new report, Bridging the gaps: A path forward to federal privacy legislation, an in-depth analysis of the most challenging obstacles to Congress passing a comprehensive federal privacy law. The report includes a detailed range of practical recommendations and options for legislative text, the result of work with a range of stakeholders to draft a consensus-driven model privacy bill that would bridge the gaps between sharply divided stakeholders (read the full legislative text of that effort here).

Among the legislative options for issues that will have to be addressed to pass a federal privacy law, the report explores: endgame issues (including preemption and enforcement), hard issues (such as limits on processing of data, civil rights, and algorithmic decision-making), solvable issues (such as covered entities, data security, and organizational accountability), and implementation issues (such as notice, transparency, and effective dates). 

Below, we discuss how the Brookings report addresses the two “endgame issues,” enforcement and preemption, on a path towards federal privacy legislation. We agree that these are endgame issues: neither is optional, as both topics must be addressed in any federal privacy law, and they are the issues on which lawmakers on both sides of the aisle (and, more broadly, industry and privacy advocates) remain the most deeply divided.

Enforcement

Any meaningful federal law must contain provisions for its enforcement. However, there is considerable disagreement regarding how a privacy law should be enforced. Enforcement mechanisms can vary widely, from agency enforcement (by the Federal Trade Commission or another federal agency), to state law enforcement (such as Attorneys General), to various kinds of private rights of action (by which individuals can challenge violations in court).

A number of Senate and House Democrats and privacy advocates are proponents of a federal private right of action (usually in addition to federal agency enforcement). Many privacy advocates observe that private litigation has played an important role in enforcing federal civil rights laws. They have also expressed concerns that a federal agency will not have sufficient resources, political will, or incentives to adequately enforce the law, for example, when a violation involves harm to only one or a few individuals.

In contrast, most tech and business groups, and many Republicans, have expressed support for the more centralized enforcement authority of the Federal Trade Commission. Typically, they observe that data privacy harms can be difficult to define and measure, and argue that centralized enforcement would provide needed clarity and legal certainty to businesses and consumers around a consistent national standard. Business stakeholders also tend to cite concerns over contingency-based class action litigation, including risks to small businesses and financial incentives for meritless litigation.

The Brookings proposal suggests a potential compromise: a tiered and targeted private right of action. Recovery would typically be limited to “actual damages,” but would impose statutory damages of up to $1000 per day for “wilful or repeated violations.” Specified harms under the duty of care would not be subject to a heightened standard, while other violations would require individuals to show a “knowing or reckless” violation to sue. Technical violations only give rise to suit if they were “wilful or repeated.” Importantly, potential plaintiffs would also be required to exercise a “right of recourse” before bringing a suit. This approach would give covered entities an opportunity to receive notice and cure the violation, and individuals a way to address privacy disputes outside the courts. 
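To make the tiers easier to follow, the sketch below encodes the proposal’s decision logic as we read the description above. It is our schematic illustration, not legislative text; the category names and the function are hypothetical.

```python
# Schematic sketch of the Brookings report's tiered private right of
# action, as described above. Category names and logic are our reading.

def may_sue(violation_type, wilful_or_repeated, knowing_or_reckless,
            recourse_exercised):
    """Return (suit available?, damages basis) for a claimed violation."""
    if not recourse_exercised:
        # Plaintiffs must first exercise the "right of recourse":
        # notice to the covered entity and a chance to cure.
        return (False, None)
    if violation_type == "technical":
        # Technical violations are actionable only if wilful or repeated.
        if not wilful_or_repeated:
            return (False, None)
    elif violation_type != "duty_of_care":
        # Outside the specified duty-of-care harms, plaintiffs must show
        # a knowing or reckless violation (a heightened standard).
        if not (knowing_or_reckless or wilful_or_repeated):
            return (False, None)
    # Recovery defaults to actual damages; statutory damages of up to
    # $1,000 per day attach only to wilful or repeated violations.
    basis = "statutory (up to $1,000/day)" if wilful_or_repeated else "actual"
    return (True, basis)

print(may_sue("duty_of_care", False, False, True))  # (True, 'actual')
print(may_sue("technical", False, True, True))      # (False, None)
```

Read this only as a summary device: it shows how the notice-and-cure gate, the heightened standards, and the damages tiers interact.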

Preemption

When Congress passes a federal privacy law, lawmakers must decide to what extent it will “preempt,” or nullify, current and future state and local privacy and data protection laws. Given the nature of modern data flows, most companies see clear benefit in uniform obligations across state lines and for consumers to have a core set of common rights. However, some argue that privacy can also have a uniquely local character, and note that state legislators have been at the forefront of many novel privacy protections, including in response to crises or rapid technological changes. 

The Brookings report proposes several potential compromises to attempt to bridge the gaps between the broad preemption in Senator Wicker (R-MS)’s staff discussion draft and the narrow preemption provisions in most Democratic bills, including Senator Cantwell’s Consumer Online Privacy Rights Act (COPRA). The report suggests preempting state laws only where they interfere with federal provisions specifically related to data collection, processing, transfers, and security. It also recommends that the Federal Trade Commission be authorized to preempt any state law inconsistent with the federal standard, and suggests a limited eight-year sunset clause on preemption.

Looking Ahead

We are optimistic that this new report from The Brookings Institution will be a source of thoughtful debate, and help stakeholders advance the conversation about these contentious issues. In addition to the difficult “endgame” issues of enforcement and preemption, the report identifies a wide range of other solvable implementation and operational issues on which there is broad agreement. As a result, it provides a highly practical starting point for stakeholders to engage on the key issues that will need consensus.

The report observes that its recommendations “will not satisfy maximalists on either side of the debate” but that it may address “legitimate interests of divergent stakeholders.” Indeed, both sides have something to gain from striking a balance – and we agree that “both have something to lose from continued inaction and stalemate.”

Thermal Imaging as Pandemic Exit Strategy: Limitations, Use Cases and Privacy Implications

Authors: Hannah Schaller, Gabriela Zanfir-Fortuna, and Rachele Hendricks-Sturrup


Around the world, governments, companies, and other entities are either using or planning to rely on thermal imaging as an integral part of their strategy to reopen economies. The announced purpose of using this technology is to detect potential cases of COVID-19 and filter out individuals in public spaces who are suspected of suffering from the virus. Experts agree that the technology cannot directly identify COVID-19. Instead, it detects heightened temperature that may be due to a fever, one of the most common symptoms of the disease. Heightened temperature can also indicate a fever resulting from a non-COVID-19 illness or non-viral causes such as pregnancy, menopause, or inflammation. Not all COVID-19 patients experience heightened temperature, and individuals routinely reduce their temperatures through the use of common medication.

In this post, we (1) map out the leading technologies and products used for thermal imaging, (2) provide an overview of the use cases currently being considered for thermal imaging, (3) review the key technical limitations of thermal scanning as described in the scientific literature, (4) summarize the chief concerns articulated by privacy and civil rights advocates, and finally, (5) provide an in-depth overview of regulatory guidance from the US, Europe and Singapore regarding thermal imaging and temperature measurement as part of deconfinement responses, before reaching (6) conclusions.


  1. Overview of Technologies Being Used

FLIR Systems, Inc., one of the largest makers of thermal imaging cameras, explains that the cameras detect infrared radiation and measure the surface temperatures of people and objects by measuring the temperature differences between them. Thermal cameras can be used to sense elevated skin temperature (EST), a proxy for core body temperature, and thus identify people who may have a fever. This allows the cameras to be used to single out people with EST for further screening with more precise tools, such as an oral thermometer. As FLIR acknowledges, thermal cameras are not a replacement for such devices, which directly measure core body temperature.

FLIR explains that thermal cameras need to be calibrated in a lab, and be periodically recalibrated to ensure that their temperature readings match the actual temperatures of people and objects. FLIR recommends having cameras recalibrated annually. In addition to reading absolute temperatures, FLIR’s cameras have a ‘screening’ mode, where people’s temperatures are measured relative to a sampled average temperature (SAT) value. This value is an average of the temperatures of ten randomly chosen people at the testing location. The camera user then sets an “alarm temperature” at 1°C to 3°C greater than the SAT value, and the camera displays an alarm when it detects someone in this zone. As FLIR notes, a SAT value can be more accurate than absolute temperatures because it accounts for “many potential variations during screening throughout the day, including fluctuations in average person temperatures due to natural environmental changes, like ambient temperature changes.” 
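As a rough sketch of the screening-mode arithmetic described above, the snippet below computes a SAT value, sets an alarm threshold, and flags readings above it. This is our illustration, not FLIR’s implementation; the ten-person sample follows the description, and the 2°C offset is one choice within the stated 1°C to 3°C range.

```python
import random

def sampled_average_temperature(readings_c, sample_size=10):
    """SAT: average skin temperature of randomly chosen people at the site."""
    sample = random.sample(readings_c, min(sample_size, len(readings_c)))
    return sum(sample) / len(sample)

def alarm_threshold(sat_c, offset_c=2.0):
    """Alarm temperature set 1-3 degrees C above the SAT value."""
    return sat_c + offset_c

def flag_for_screening(reading_c, threshold_c):
    """Flag a person for follow-up with a precise device (e.g., an oral thermometer)."""
    return reading_c >= threshold_c

# Example: baseline skin-temperature readings (degrees C) are made up.
baseline = [36.1, 36.4, 36.2, 36.6, 36.3, 36.5, 36.2, 36.4, 36.1, 36.3]
sat = sampled_average_temperature(baseline)
threshold = alarm_threshold(sat)              # ~38.3 C with a 2 C offset
print(flag_for_screening(38.6, threshold))    # True: refer for a manual check
```

Because the threshold floats with the sampled average, the screening adapts to ambient conditions rather than relying on a fixed absolute temperature.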

The accuracy of a thermal camera’s reading is affected by several factors, including the camera’s distance from the target. FLIR suggests that the camera should be as close to the target as possible, and telephoto lenses might be appropriate for longer-range readings. The camera’s functions and settings can affect its accuracy as well and need to be appropriately configured.

Thermal imaging can be paired with various other technologies. Draganfly, Inc., a Canadian drone company, has mounted thermal sensors on what it calls ‘pandemic drones’ for broad-scale aerial surveillance. The drones are also equipped with computer vision that can sense heart and respiratory rate, detect when someone coughs or sneezes, and measure how far apart people are from one another to enforce social distancing. Reportedly, it can do all of this through a single camera from a distance of 160 feet. In a video interview, Draganfly’s CEO stated that the sensors can even distinguish between different kinds of coughing.

Thermal imaging has also been paired with facial recognition by some companies based in China, including SenseTime and Megvii. Chinese AI startup Rokid has mounted a camera on a pair of glasses that uses facial recognition and thermal imaging to identify people, measure their temperature, and record this information. In Thailand, thermal imaging has been integrated into the existing biometric-based border control system, which identifies travelers using fingerprint scans and facial recognition.

While many US locations still perform temperature screenings with handheld thermometers, interest in thermal imaging cameras is growing rapidly. Several thermal imaging companies claim to have sold thousands of units to US customers since the COVID-19 outbreak began. Thermal cameras are appealing as an exit-strategy solution due to some promised advantages over handheld thermometers: vendors claim the cameras can detect the temperatures of many people at once, whereas handheld thermometers can only test one person at a time, and that the cameras can measure temperatures from a distance as people move. Theoretically, these abilities would lessen or eliminate the need for people to wait in line to have their temperatures taken, which in turn also reduces the risk of COVID-19 transmission. All of these promises should be weighed against the limitations of the technology and the implications for privacy and other civil rights.

  2. Current Use Cases

Airports. Airports across the world are using thermal cameras to screen travelers. Some countries, including China, Japan, South Korea, Singapore, Canada, and India, began using them in 2002-2003 (in response to SARS) or 2009 (in response to swine flu) and continue to use them in response to COVID-19. Some airports in these countries have installed additional cameras in recent months. Other countries, like Italy, have recently begun using thermal imaging at airports for the first time. Rome’s Fiumicino Airport is testing helmets equipped with thermal cameras, worn by its staff, to detect travelers’ temperatures. Other countries have resisted this technology. In the UK, Public Health England decided that British airports will not use thermal cameras, although the CEO of Heathrow Airport was in favor of doing so. US airports are not currently using thermal cameras, though some are evaluating the possibility of doing so; instead, screening procedures include taking temperatures with a handheld thermometer, looking for signs of illness, and requiring travelers to fill out a questionnaire. In response to US Department of Homeland Security plans to check commercial airline passengers’ temperatures, a member of the Privacy and Civil Liberties Oversight Board is pressing the agency for more details, warning that the global pandemic “is not a hall pass to disregard the privacy and civil liberties of the traveling public.”

Transportation. Some Chinese cities are equipping public transportation centers with cameras that combine thermal imaging and facial recognition. Wuhan Metro transport hubs are being equipped with cameras from Guide Infrared, and Beijing railway stations are adding cameras from Baidu and Megvii. In addition, a Chinese limousine service has installed thermal cameras in its vehicles to monitor drivers and passengers. In Dubai, police are using thermal imaging and facial recognition to monitor public transport users via cameras mounted on ‘smart helmets.’

Employee Screening. Companies are using thermal cameras to screen employees for fevers. This is done broadly in China and South Korea at entrances to offices and major buildings, often using combined thermal imaging and facial recognition. Elsewhere, thermal cameras without facial recognition are increasingly used. For example, Brazilian mining company Vale SA is installing thermal cameras to screen employees entering buildings, mines, and other areas. Indian Railways installed a thermal camera from FLIR at an office entrance, among other COVID-19 mitigation measures.

Some US companies and organizations are also screening employees with thermal cameras, including Tyson Foods; Amazon, which is screening warehouse workers; and the VA Medical Center in Manchester, New Hampshire, which is scanning staff and patients. It appears, however, that most US companies that have begun screening employees for fevers, like Walmart and Home Depot, are using handheld thermometers.

Public Facing Offices. As stated above, thermal cameras read skin temperature and are not a substitute for temperature-taking methods that measure core body temperature. However, some locations are making decisions based solely on thermal camera readings. For example, in Brasov, Romania, a city office installed thermal cameras at its entrances, automatically denying entrance to anyone with a temperature over 38°C. Because thermal camera readings do not always match core body temperatures, there is a risk that people without fevers will be unfairly impacted when access decisions rely solely on those readings.

Customer and Patient Screening. Thermal cameras are growing in popularity among US businesses and hospitals as a way to screen customers and patients, respectively. A grocery store chain in the Atlanta, Georgia area is screening incoming customers using FLIR cameras. Customers with temperatures of 100.4°F or higher are pulled aside by an employee and given a flyer asking them to leave, in an attempt to handle the situation discreetly. Wynn Resorts in Las Vegas plans to screen guests at its properties and require anyone who registers a temperature of 100.4°F or higher to leave. Texas businesses and hospitals are also starting to adopt thermal cameras. Hospitals elsewhere are following this trend – for example, Tampa General Hospital in Florida now screens patients with a thermal camera system made by care.ai, a healthcare technology company.

Public Surveillance. Thermal cameras allow authorities and businesses to screen large numbers of people in real time, making them ideal for monitoring public areas. In China, thermal cameras with facial recognition surveil many public places; some systems can even notify police of people who are not wearing masks. In several cities in Zhejiang province, police and other officials are wearing Rokid’s thermal glasses to monitor people in public spaces like parks and roadways. The glasses combine thermal imaging with facial recognition and also record photos and videos. Thermal sensing drones are also being used in numerous cities.

Use of thermal imaging has grown beyond East Asia, too. In India, a thermal camera provider is considering installing its cameras around Delhi, both in public spaces and in businesses. Huawei has also offered thermal cameras as a solution for monitoring COVID-19 in India. In New Zealand, thermal cameras originally developed for pest control are being reworked to monitor for fevers in public places and are in use by some businesses. Police in some areas of the UK use thermal cameras to spot people breaking social distancing orders at night. The Qassim region of Saudi Arabia is monitoring the public with drones carrying thermal cameras.

It is uncommon in the US to use thermal cameras as a tool for public surveillance. However, in April, police in Westport, Connecticut tested a Draganfly ‘pandemic drone’ intended to measure temperatures and enforce social distancing. Westport police use drones for other purposes, but not for this kind of mass monitoring. The program was quickly dropped after criticism from the public and from the American Civil Liberties Union (ACLU) of Connecticut, which questioned the effectiveness of the drones and raised privacy concerns. Other cities that expressed interest in Draganfly’s drones, like Los Angeles, Boston, and New York, may still be considering them.

In addition to drones, some US entities are reportedly considering Rokid’s thermal glasses. The company is discussing the sale of its glasses with various US businesses, hospitals, and law enforcement departments.

2. Technical and Other Limitations

In general, thermal imaging is used in regulated clinical settings with validated clinical protocols to diagnose or detect illness and triage patients. The use of specific thermal imaging devices to detect possible cases of COVID-19, or for other medical purposes generally, requires US Food and Drug Administration (FDA) approval. In such cases, the FDA considers thermal imaging technologies to be medical devices. Concerning labeling for thermal imaging technologies, the FDA stated:

“When evaluating whether these products are intended for a medical purpose, among other considerations, FDA will consider whether: 

1) They are labeled or otherwise intended for use by a health care professional; 

2) They are labeled or otherwise for use in a health care facility or environment; and 

3) They are labeled for an intended use that meets the definition of a device, e.g., body temperature measurement for diagnostic purposes, including such use in non-medical environments (e.g., airports).”

The use of thermal imaging in non-medical environments, however, makes it necessary to explore the technical limitations of using such technologies in high-traffic areas, like airports, for medical but non-diagnostic purposes.

The fact that fever or body temperature alone can be a poor indicator of viral infection or contagion complicates the validity of thermal scanning for COVID-19 surveillance. Fevers can often, if not usually, be masked with over-the-counter or unrestricted treatments, such as non-steroidal anti-inflammatory drugs, which can suppress signs of fever for four to six hours depending on the severity or stage of the condition. Conversely, non-infectious conditions, such as pregnancy, menopause, or inflammation, can also cause elevated temperature, which renders thermal scanning highly sensitive but non-specific to any particular condition. For example, according to Johns Hopkins Medicine, hot flashes are the most common symptom of menopause, affecting 75% of all women in this stage, for up to two years. Confounding factors such as inconsistencies or variations in viral response or strain can likewise render thermal scanning insufficient for detecting specific types of infectious diseases like respiratory viruses.
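
To make the “highly sensitive but non-specific” point concrete, the Python sketch below computes, with purely hypothetical numbers, the share of people flagged by a fever screen who are actually infected. Even a reasonably sensitive screen produces mostly false alarms when infection prevalence among those screened is low.

```python
# Hypothetical illustration of why a sensitive but non-specific screen
# misidentifies many people. None of these numbers are measured values.

def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """Probability that a flagged person is actually infected (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Assumptions: the screen catches 90% of genuine fevers, wrongly flags 15% of
# healthy people (exertion, menopause, pregnancy, etc.), and 1% of the people
# passing the camera are infected.
ppv = positive_predictive_value(sensitivity=0.90, specificity=0.85, prevalence=0.01)
print(f"Share of flagged people actually infected: {ppv:.1%}")  # about 5.7%
```

Under these assumed numbers, roughly 94% of flagged individuals would be false positives, the group most exposed to the discrimination risks discussed below.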

Scientific literature suggests that reliance on public thermal scanning to detect fever is concerning from an ethical standpoint and, given its technical limitations, is not a reliable disease surveillance strategy to support phased reopening. In a study evaluating the utility of thermal scanning in airports, researchers concluded that because the technology would be applied in a public setting unbeknownst to passengers, controversy and complexity around matters of opt-in/opt-out consent are inevitable. Studies have shown that thermal imaging technology can reasonably correlate core temperatures with influenza infection. However, its technical limitations render it insufficient to detect fever in settings where several individuals are moving in different directions at once, as in public settings with random, high pedestrian traffic. FDA labeling requirements are consistent with this limitation, mandating that labels acknowledge that the technology “should be used to measure only one subject’s temperature at a time.” Thermal scanning protocols would therefore likely require structured, individual-level assessments, along with non-compulsory, non-coercive (freely given) consent, to be workable within public health surveillance settings that adhere to ethical standards of personal autonomy.

3. Privacy and Civil Rights Advocates’ Concerns

Privacy and civil rights advocates in the US have raised concerns about the potential consequences of using thermal imaging, such as discrimination and loss of opportunity. Since thermal imaging cannot distinguish fevers caused by COVID-19 from other causes of high body temperature, equating raised body temperature with the virus would lead to many people being falsely identified as COVID-19 risks and facing the downsides associated with that label, including discrimination. The Electronic Frontier Foundation (EFF) points out that thermal cameras are surveillance devices that can “chill free expression, movement, and association; aid in targeting harassment and over-policing of vulnerable populations; and open the door to facial recognition.” In light of the questionable effectiveness of thermal cameras, EFF cautions against using them to monitor the public at large. The ACLU of Connecticut criticized Draganfly’s drones as “privacy-invading,” and urged officials to adopt only those surveillance measures against the spread of COVID-19 that are “advocated for by public health professionals and restricted solely for public health use.” These concerns are also expressed in the context of fears that surveillance technologies adopted during the pandemic may remain long after their original purpose has been fulfilled.

In a recent White Paper on “Temperature Screening and Civil Liberties during an Epidemic,” the ACLU recommended that temperature screening “should not be deployed unless public health experts say that it is a worthwhile measure notwithstanding the technology’s problems. To the extent feasible, experts should gather data about the effectiveness of such checks to determine if the tradeoffs are worth it.” The ACLU further recommended that people should know when their temperature is going to be taken and that “standoff thermal cameras should not be used.” In addition, “no action concerning an individual should be taken based on a high reading from a remote temperature screening device unless it is confirmed by a reading from a properly operated clinical grade device, and provisions should be made for those with fevers not related to infectious illness.”
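
As a rough illustration of the two-step protocol the ACLU describes, the Python sketch below encodes a policy in which a remote thermal reading alone never triggers action; it only prompts a confirmatory reading from a clinical-grade device, with provision for non-infectious causes of fever. The threshold, names, and decision labels are illustrative assumptions, not drawn from the White Paper or any regulation.

```python
from typing import Optional

SCREEN_THRESHOLD_C = 38.0  # 38.0 C (100.4 F), a commonly used fever cutoff (assumption)

def screening_decision(thermal_c: float,
                       clinical_c: Optional[float],
                       noninfectious_cause_reported: bool) -> str:
    """Decide entry from a thermal flag plus clinical confirmation.

    Returns only a decision string; no temperature is stored, mirroring
    guidance that readings should not be recorded.
    """
    if thermal_c < SCREEN_THRESHOLD_C:
        return "admit"                          # screen did not flag
    if clinical_c is None:
        return "refer to clinical-grade check"  # a flag alone never denies entry
    if clinical_c < SCREEN_THRESHOLD_C:
        return "admit"                          # thermal flag not confirmed
    if noninfectious_cause_reported:
        return "individual assessment"          # e.g., pregnancy or menopause
    return "deny entry and offer alternatives"

print(screening_decision(38.4, None, False))    # -> refer to clinical-grade check
print(screening_decision(38.4, 37.2, False))    # -> admit
```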

4. Regulatory Responses

In the US, regulatory responses to temperature taking in non-healthcare scenarios stem primarily from anti-discrimination statutory obligations. The Equal Employment Opportunity Commission (EEOC) recently revised its rules regarding the Americans with Disabilities Act in the context of a pandemic. The revisions allow employers to take employees’ temperatures during the COVID-19 crisis. They also allow employers to take job candidates’ temperatures after making a conditional offer, as well as to withdraw a job offer if a newly hired employee is diagnosed with COVID-19. However, the guidance does not distinguish between manual temperature checks and thermal scanning cameras.

This distinction drives many of the regulatory responses in Europe, where multiple Data Protection Authorities (DPAs) have published guidance on checking the temperatures not only of employees, but also of customers or pedestrians. One of the regulators that draws a clear distinction between the two methods of measuring temperature is the CNIL (the French DPA). According to the CNIL, “the mere verification of temperature through a manual thermometer (such as, for example, the contactless thermometers using infrared) at the entrance of a place, without any trace being recorded, and without any other operation being effectuated (such as taking notes of the temperature, adding other information etc.), does not fall under data protection law.”

However, things fundamentally change when thermal scanning through cameras is involved. In this sense, the CNIL issued a prohibition: “According to the law (in particular Article 9 [of the General Data Protection Regulation] GDPR), and in the absence of a law that expressly provides this possibility, it is forbidden for employers to:

collect temperature readings of employees or visitors as soon as they are recorded in an automated processing system or in a paper register; or

set up tools for automated temperature capture, such as thermal cameras.”

The prohibition of these two types of temperature measurement echoes guidance issued by the French Ministry of Labor in its “National Protocol for Deconfinement Measures.” Before including a prohibition for temperature measurement with the use of cameras, the Protocol relies on the findings of the High Council for Public Health that the COVID-19 infection may be asymptomatic or barely symptomatic, and that “fever is not always present in patients.” It also recalls that a person with COVID-19 can be infectious “up to 2 days before the onset of clinical signs,” and that “bypass strategies to this control are possible by taking antipyretics.” The Ministry of Labor concludes that “taking temperature to single out a person possibly infected would be falsely reassuring, with a non-negligible risk of missing infected persons.” 

The Spanish DPA takes the position that taking the temperatures of individuals to determine their ability to enter the workplace, commercial spaces, educational institutions, or other establishments amounts to processing of personal data, without making any distinction in its guidance between manually held thermometers and thermal imaging. It seems to focus on the purposes for which individual measurement of temperature is used when making this assessment. The Spanish DPA highlights in its detailed guidance that “this processing of personal data amounts to a particularly severe interference in the rights of those affected. On one hand, because it affects data related to health, not only because the value of the body temperature is data related to health by itself, but also because, as a consequence of that value it is assumed that a person suffers or not from a disease, in this case a coronavirus infection.”

The Spanish DPA also notes that the consequences of a possible denial of entry to a specific space may have a significant effect on the person concerned. Therefore, it urges organizations to consider, among other measures, properly informing workers, visitors, or clients about temperature monitoring. Organizations should also allow individuals with a higher-than-normal temperature to challenge a decision impeding their access before personnel who are qualified to assess possible alternative reasons for the high temperature and who can allow access where justified. It is also relevant to note that, when it comes to lawful grounds for processing, the Spanish DPA does not deem consent and legitimate interests appropriate lawful grounds. The processing needs to be based either on a legal obligation or on the interest of public health, ensuring that the additional conditions required by these two lawful grounds are met.

The Italian DPA (Garante) takes the position that taking one’s “body temperature in real time, when associated with the data subject’s identity, is an instance of processing personal data.” As a consequence of this fact, the DPA states that “it is not permitted to record the data relating to the body temperature found; conversely, it is permitted to record the fact that the threshold set out in the law is exceeded, and recording is also permitted whenever it is necessary to document the reasons for refusing access to the workplace.” This rule applies in an employment context. Where the body temperature of customers or occasional visitors is checked, “it is not, as a rule, necessary to record the information on the reason for refusing access, even if the temperature is above the threshold indicated in the emergency legislation.” 

It is important to highlight here that in the case of Italy, there is special legislation adopted for managing the COVID-19 pandemic that mandates temperature taking by “an employer whose activities are not suspended (during the lockdown – n.)” to comply with the measures for the containment and management of the epidemiological emergency. This special legislation acts as a lawful ground for processing. Once the legislation expires or becomes obsolete, taking the temperature of employees or other individuals entering a workplace will likely remain without a lawful ground. According to the Garante, another instance where special emergency legislation allows for temperature measurement is in the case of airport passengers. It should also be noted that neither the Garante’s guidance, nor the special legislation mentioned above make a distinction between manual temperature taking and the use of thermal cameras. 

By contrast, the Belgian DPA takes the position that “the mere capturing of temperature” is not a processing of personal data, without distinguishing between manual temperature taking and the use of thermal cameras. Accordingly, the DPA issued very brief guidance stating that “if taking the temperature is not accompanied by recording it somewhere or by another type of processing, the GDPR is not applicable.” It nonetheless reminds employers that all the measures they implement must be in accordance with labor law as well as the guidance of competent authorities. 

The Dutch DPA warned controllers that want to measure the temperature of employees or visitors about the uncertainty of detecting COVID-19 by merely detecting a fever. It also advised that “taking temperatures is not simply allowed. Usually you use this to process medical data. And this falls under the GDPR.” According to the Dutch DPA, “the GDPR applies in this situation because you not only measure someone’s temperature, but you also do something with this medical information. After all, you don’t measure for nothing. Your goal is to give or deny someone access. To this end, this person’s temperature usually has to be passed on or recorded somewhere so that, for example, a gate can open to let someone in.” In further guidance on the question of whether temperature measurement falls under the GDPR, the DPA explained that “a person’s temperature is personal data. (…) The results (of temperature measurement – n.) will often have to be passed on and registered somewhere to allow or deny someone access. Systems in which gates open, which give a green light or which do something automated on the basis of the measurement data are also protected by the GDPR.” The DPA also states that even when the GDPR is not applicable in those cases where the temperature is merely read with no further action, a breach of the right to privacy or of other fundamental rights might be at issue: “The protection of other fundamental rights, such as the integrity of the body, may also be expressly at stake. Depending on how it is set up, only measuring temperature can indeed be illegal.”

The UK Information Commissioner’s Office (ICO) warns organizations that want to deploy temperature checks or thermal cameras on site that “when considering the use of more intrusive technologies, especially for capturing health information, you need to give specific thought to the purpose and context of its use and be able to make the case for using it. Any monitoring of employees needs to be necessary and proportionate, and in keeping with their reasonable expectations.” However, the ICO does seem to allow such practices in principle, but only after a Data Protection Impact Assessment (DPIA) is conducted. The ICO states that it worked with the Surveillance Camera Commissioner to update a DPIA template for uses of thermal cameras. “This will assist your thinking before considering the use of thermal cameras or other surveillance,” the ICO adds.

The Czech DPA also adopted specific guidance for the use of thermal cameras and temperature screening, taking the position that data protection law is applicable only when “the employer intends to record the performed measurements and further work with data related to high body temperature in conjunction with other data enabling the identification of the person whose body temperature is being taken.” As opposed to the Spanish DPA, which found that legitimate interests cannot be a lawful ground for processing such data, the Czech DPA suggests that employers can process the temperature of their employees on the basis of legitimate interests, paired with one of the acceptable uses for processing health data under Article 9(2). The DPA further advises that the necessity of such measures needs to be continuously assessed and warns that “measures which may be considered necessary in an emergency situation will be unreasonable once the situation returns to normal.”

In Germany, the Data Protection Commissioner of Saarland has already opened an investigation into a supermarket that installed thermal cameras to admit only customers with normal temperatures to its premises, declaring to the media that “the filming was used to collect personal data, including health data, in order to identify a potential infected person,” and that this measure breached the GDPR and the right to informational self-determination. According to media reports, the supermarket decided to suspend the thermal scanning measure. In addition, the DPA of Rhineland-Palatinate notes in official guidance that “the mere fact that an increased body temperature is recorded does not automatically lead to the conclusion that COVID-19 is present. Conversely, an already existing coronavirus disease does not necessarily have to be identified by an increased body temperature. Therefore, the suitability of the body temperature measurement is in doubt.” The DPA suggests that employers should instead implement alternative measures to comply with their duty of care towards the health of employees, such as allowing work from home whenever possible or encouraging employees to seek medical advice at the first signs of disease. The DPA of Hamburg is more precise and clearly states that “neither the use of thermal imaging cameras nor digital fever thermometers to determine symptoms of illness is permitted” to screen persons entering shops or other facilities. Such screening can only be offered to individuals as a “voluntary service.”

All of the DPAs that have issued guidance on this matter appear to have determined that thermal scanning and temperature measurement are particularly intrusive measures. But their responses vary: from a clear prohibition on using thermal cameras to triage people (CNIL, Hamburg DPA), to allowing thermal scanning in a quite restricted way (Spanish DPA), to possibly allowing video thermal scanning by default as long as a DPIA is conducted (UK ICO), to holding that handheld temperature measurement without recording does not fall under data protection law (Dutch DPA, Belgian DPA, Czech DPA, CNIL), to making no differentiation between handheld temperature measurement and video thermal scanning when allowing such measures (Italian DPA). The European Data Protection Board (EDPB) has not yet issued specific guidance on the use of thermal cameras or, more generally, on the measurement of temperature. Given the diversity of approaches taken by European DPAs, it may be necessary for the EDPB to provide harmonized guidance.

Elsewhere in the world, the Singaporean Personal Data Protection Commission advises organizations that “where possible, deploy solutions that do not collect personal data. For instance, your organisation may deploy temperature scanners to check visitors’ temperature without recording their temperature readings, or crowd management solutions that only detect or measure distances between human figures without collecting facial images.”

5. Conclusion

This article has provided an overview of the use cases for thermal scanning cameras, their technical and medical limitations, the civil rights concerns surrounding them, and the regulatory responses to date regarding their use in the fight against the spread of COVID-19, as countries enter the first “deconfinement” stage of this pandemic. Organizations considering the deployment of temperature measurement as part of their exit strategies should carefully analyze whether the benefits of such measures outweigh the risks of discrimination, loss of opportunity, and other harms to the civil rights of the individuals who will be subjected to this type of screening en masse. Advice from public health authorities, public health specialists, and other regulators should always be part of this assessment, as should consultation with the individuals who will be subjected to these measures, to understand their legitimate expectations about safety at the current stage of the pandemic relative to their other rights.

The authors thank Charlotte Kress for her research support. 

For any inquiries, the authors can be contacted at [email protected] or [email protected]

Bipartisan Privacy Bill Would Govern Exposure Notification Services

Authors: Stacey Gray, Senior Counsel; Katelyn Ringrose, Christopher Wolf Diversity Law Fellow; and Polly Sanderson, Policy Counsel


Yesterday, Senators Cantwell (D-WA), Cassidy (R-LA), and Klobuchar (D-MN) introduced a new COVID-19 data protection bill, the Exposure Notification Privacy Act, which would create legal limits for “automated exposure notification services.” The bill comes on the heels of Republican and Democratic-led bills introduced earlier this month that would govern COVID-19 data much more broadly.

In contrast, the Exposure Notification Privacy Act would specifically regulate “exposure notification” apps, primarily mobile apps that enable individuals to receive automated alerts if they have been exposed to COVID-19. Such apps often harness Bluetooth, location data, or other information from phones to enable automated alerts for users who have come into contact with a then-asymptomatic person who is later diagnosed with COVID-19. The Centers for Disease Control and Prevention (CDC) has described exposure notification systems as a complement to the traditional manual techniques used to monitor the spread of COVID-19.
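
For readers unfamiliar with the mechanics, the Python sketch below is a heavily simplified stand-in for the matching step behind Bluetooth-based exposure notification: phones broadcast short-lived random identifiers, log the identifiers they hear, and later compare them locally against identifiers re-derived from keys that diagnosed users choose to publish. The HMAC-based key derivation here is an illustrative assumption, not the actual cryptographic specification used by Google and Apple.

```python
import hashlib
import hmac
import os

def rolling_identifier(daily_key: bytes, interval: int) -> bytes:
    """Derive a short-lived broadcast identifier from a daily key (stand-in scheme)."""
    return hmac.new(daily_key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

# Phone A broadcasts identifiers derived from a secret daily key; phone B logs
# the identifiers it hears nearby, without learning who sent them.
key_a = os.urandom(16)
heard_by_b = {rolling_identifier(key_a, i) for i in range(100, 110)}

# A is later diagnosed and voluntarily publishes its daily key.
published_keys = [key_a]

# B re-derives all possible identifiers from the published keys and checks for
# overlap entirely on the device; no central server sees B's contact log.
exposed = any(
    rolling_identifier(key, i) in heard_by_b
    for key in published_keys
    for i in range(144)  # e.g., 10-minute intervals over one day
)
print("Exposure match found:", exposed)  # True
```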

As cities and states begin to reopen, many public health authorities are working with private companies or not-for-profits to develop these apps. Large employers are also considering using exposure notification services as part of “back to work” strategies to help ensure safe working environments. For automated exposure notifications to be highly effective, it is estimated that 40-60% of a given population would need to install such an app (though contact tracing may work at much lower adoption levels than most people think). Recent research, however, shows a marked lack of trust among the American population when it comes to digital privacy amid COVID-19. For these reasons, if exposure notification methods are to be effective, trust and adoption are crucial.
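
A simple, assumption-laden calculation illustrates why adoption estimates loom so large: if an exposure can be detected only when both people in a contact have the app installed, the share of contacts covered scales with the square of the adoption rate.

```python
# Simplified model: both parties to a contact must run the app for the
# exposure to be detectable, so coverage ~ adoption**2. This ignores
# clustering effects and the partial benefits possible at lower adoption.
for adoption in (0.2, 0.4, 0.6):
    print(f"adoption {adoption:.0%} -> contacts covered: {adoption ** 2:.0%}")
# adoption 20% -> contacts covered: 4%
# adoption 40% -> contacts covered: 16%
# adoption 60% -> contacts covered: 36%
```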

“Exposure notification services can support the work of public health agencies and can help employers keep workplaces safe, but only if they are designed and implemented with privacy in mind and in the public interest. The Cantwell-Cassidy bill guarantees that data collected by mobile apps is protected by strong legal safeguards, in addition to technical measures companies put in place.” – Jules Polonetsky, CEO, Future of Privacy Forum

Below, FPF summarizes the core provisions of the Exposure Notification Privacy Act, which, if passed, would take effect immediately and would codify core data protection principles, such as purpose limitation. We describe below the Act’s: (1) jurisdictional and material scope; (2) obligations for covered entities; (3) anti-discrimination provisions; and (4) federal and state enforcement and oversight.

The full text of the Exposure Notification Privacy Act can be found HERE.

The section-by-section of the bill can be found HERE.

The one-pager of the bill can be found HERE.

Jurisdictional and Material Scope

Unlike other COVID-19 privacy bills recently introduced, the Exposure Notification Privacy Act has a narrow scope, applying only to entities that collect data through “automated exposure notification services,” i.e., mobile apps that enable automated alerts to those who may have been exposed to COVID-19.

Covered entities include commercial businesses, non-profits, and common carriers that collect or process data that is “linked or reasonably linkable to [any] individual or device linked or reasonably linkable to an individual.” Although the bill does not contain an explicit exemption for de-identified data, covered data does not include “aggregate data.”

Importantly, this bill would not apply to the various technologies, including mobile apps, that enable traditional manual contact tracing, i.e., tracing that involves public health experts interviewing a diagnosed person and contacting friends and family who may have been exposed. For example, New York City is partnering with Salesforce to assist manual contact tracers by deploying a call center as well as a customer relationship and case management system. San Francisco and Massachusetts have also been ramping up manual contact tracing efforts. Many of those efforts are already subject to confidentiality restrictions that apply to public health agencies.

In addition, this bill would not affect state and local government entities that are developing and implementing automated exposure notification services “in house,” without partnering with private companies or non-profits. Generally, the federal government cannot directly regulate local governments engaged in traditionally local activities such as public health.

Obligations of Covered Entities

Under this bill, commercial entities or nonprofits that operate “automated exposure notification services” would be subject to strict legal requirements. Many of the bill’s requirements are consistent with the requirements for COVID-19 apps set by the App Store and Google Play. As a result, app developers using the API created by Google and Apple should already be substantially in compliance.

These obligations include:

Affirmative express consent: individuals must voluntarily opt in before data is collected through an automated exposure notification service.

Collaboration with public health authorities: operators must operate the service in collaboration with a public health authority.

Actual diagnoses: exposure notifications may be triggered only by medically confirmed diagnoses.

Data minimization and purpose limitation: only data necessary to the service may be collected, and it may not be used for unrelated purposes.

Deletion and security: individuals may withdraw from the service and have their data deleted, and operators must maintain reasonable data security safeguards.

Anti-Discrimination Provisions 

In addition to obligations on app providers, the bill features strong anti-discrimination provisions that would apply to restaurants, educational institutions, hotels, retailers, and other places of “public accommodation” (as defined in Section 301 of the Americans with Disabilities Act). If passed, the bill would make it unlawful for these kinds of establishments to use data from such automated exposure notification services to deny people entry or services, or otherwise discriminate against them.

This would likely prevent these kinds of notification apps from being repurposed as immunity passports, at least to the extent that they would be used to disallow someone from using public spaces “based solely on data collected or processed through an exposure notification service or an individual’s choice to use or not use” such a service. Immunity passports are methods for individuals to verify their “risk status” with respect to COVID-19 (i.e., that they have not been exposed or are not showing symptoms) for purposes of travel and work. Immunity passports have been widely criticized for their potential lack of efficacy, as well as for their disparate impact on the basis of class and race.

Enforcement and Oversight

The Exposure Notification Privacy Act’s requirements would be enforced by the Federal Trade Commission (FTC) and State Attorneys General (AGs). A violation of the bill would be treated as a violation of the FTC’s prohibition against unfair or deceptive acts or practices under the FTC Act (15 U.S.C. 57a(a)(1)(B)). The bill also preserves existing rights of individuals under other federal and state laws, including consumer protection laws, civil rights laws, and the common law. We expect further discussion in Congress around the issue of a single federal standard, given the expected interstate interoperability of many exposure notification apps. The Exposure Notification Privacy Act would become effective on the date of enactment.

This bill would also extend the purview of the Privacy and Civil Liberties Oversight Board (PCLOB) to federally declared public health emergencies, in addition to the federal counterterrorism actions it already oversees. PCLOB is an independent executive branch agency currently tasked with ensuring that federal efforts to protect the U.S. from terrorism appropriately safeguard privacy and civil liberties.

Looking Ahead

As governments around the world grapple with “back to work” strategies for 2020 and beyond, many are considering whether and how to use exposure notification services to help contain the virus. Senator Cantwell’s proposal offers a promising legal model to build much-needed trust in such services. 

In the United States, public health authorities in North Dakota, South Dakota, Utah, Georgia, California, and other states are working with private companies to develop contact tracing services. Abroad, Canada recently released “Privacy Principles for Contact Tracing,” Australia has enacted legislation for its COVIDSafe tracing app to allay privacy concerns, and the UK has created a Data Ethics Advisory Board for the NHS COVID-19 App.

Meanwhile, Google and Apple have partnered to provide the interoperability and API access needed for Bluetooth-powered exposure notification services to function effectively. Both companies have outlined strict standards for apps deploying this new API, in addition to creating guidelines for any COVID-19 related apps, including those that offer medical advice, education or training services, and social support. 


Did we miss anything? Let us know at [email protected] as we continue tracking developments related to exposure notification services.

Image Credit: Photo by Mika Baumeister on Unsplash