Commoditization of Data is the Problem, Not the Solution – Why Placing a Price Tag on Personal Information May Harm Rather Than Protect Consumer Privacy

This guest post is by Lokke Moerel, a Professor of Global ICT Law at Tilburg University and Senior of Counsel at Morrison & Foerster in Berlin, and Christine Lyon, a partner at Morrison & Foerster in Palo Alto, California. To learn more about FPF in Europe, please visit https://fpf.org/eu.

By Lokke Moerel and Christine Lyon[1]

Friend and foe alike agree that we are in the midst of a digital revolution that is transforming society as we know it. In addition to economic and social progress, every technological revolution also brings disruption and friction.[2] The new digital technologies (and, in particular, artificial intelligence (AI)) are fueled by huge volumes of data, leading to the common saying that “data is the new oil.” These data-driven technologies transform existing business models and present new privacy issues and ethical dilemmas.[3] Social resistance to the excesses of the new data economy is becoming increasingly visible and is leading to calls for new legislation.[4]

Commentators argue that a relatively small number of companies are disproportionately profiting from consumers’ data, and that the economic gap continues to grow between technology companies and the consumers whose data drives the profits of these companies.[5] Consumers are also becoming more aware that free online services come at a cost to their privacy; the modern adage has become that consumers are not the recipients of free online services but are actually the product itself.[6]

U.S. legislators are responding by proposing prescriptive notice-and-choice requirements intended to serve a dual purpose: providing consumers with greater control over the use of their personal information while at the same time enabling them to profit from that use.

An illustrative example is California Governor Gavin Newsom’s proposal that consumers should “share the wealth” that technology companies generate from their data, potentially in the form of a “data dividend” to be paid to Californians for the use of their data.[7] California’s Consumer Privacy Act (CCPA) also combines the right of consumers to opt out of the sale of their data with a requirement that any financial incentive offered by companies to consumers for the sale of their personal information should be reasonably related to the value of the consumer’s data.[8]

These are not isolated examples. The academic community is also proposing alternative ways to address wealth inequality. Illustrative here is Lanier and Weyl’s proposal for the creation of data unions that would negotiate payment terms for user-generated content and personal information supplied by their users, which we will discuss in further detail below.

Though these attempts to protect, empower, and compensate consumers are commendable, the proposals to achieve these goals are actually counterproductive. Here, the remedy is worse than the disease.

To illustrate the underlying issue, take the example of misleading advertising and unfair trade practices. If an advertisement is misleading or a trade practice unfair, it is intuitively understood that companies should not be able to remedy the situation by obtaining the consumer’s consent to the practice. In the same vein, if companies generate large revenues from their misleading and unfair practices, the solution is not to ensure consumers get their share of the illicitly obtained revenues. If anything would provide an incentive to continue misleading and unfair practices, this would be it.

As always with data protection in the digital environment, the issues are far less straightforward than their offline equivalents and therefore more difficult to understand and address. History shows that whenever a new technology is introduced, society needs time to adjust. As a consequence, the data economy is still driven by the possibilities of technology rather than by social and legal norms.[9] This inevitably leads to social unrest and calls for new rules, such as the call by Microsoft’s CEO, Satya Nadella, for the U.S., China, and Europe to come together and establish a global privacy standard based on the EU General Data Protection Regulation (GDPR).[10]

From “privacy is dead” to “privacy is the future”: the point here is not only that technical developments are moving fast, but also that social standards and customer expectations are evolving.[11]

To begin to understand how our social norms should be translated to the new digital reality, we will need to take the time to understand the underlying rationales of the existing rules and translate them to the new reality. Our main point here is that the two objectives of consumer control and wealth distribution are separate but have become intertwined: the proposals seek to empower consumers to take control of their data, but they also treat privacy protection as a right that can be traded or sold. These purposes are equally worthy, but they cannot be combined; they need to be regulated separately and in different ways. Adopting a commercial trade approach to privacy protection will ultimately undermine rather than protect consumer privacy. To complicate matters further, experience with the consent-based model for privacy protection in other countries (and especially under the GDPR) shows that the consent-based model is flawed and fails to achieve privacy protection in the first place. We will first discuss why consent is not the panacea for achieving privacy protection.

 

Why Should We Be Skeptical of Consent as a Solution for Consumer Privacy?

On the surface, consent may appear to be the best option for privacy protection because it allows consumers to choose how they will allow companies to use their personal information. Consent tended to be the default approach under the EU’s Data Protection Directive, and the GDPR still lists consent first among the potential grounds for processing of personal data.[12] Over time, however, confidence in consent as a tool for privacy protection has waned.

Before the GDPR, many believed that the lack of material privacy compliance was mostly due to a lack of enforcement under the Directive, and that all would be well once the European supervisory authorities had higher fining and broader enforcement powers. Now that these powers have been granted under the GDPR, however, not much has changed, and privacy violations still feature in newspaper headlines.

By now the realization is setting in that non-compliance with privacy laws may also stem from a fundamental flaw in consent-based data protection. The laws are based on the assumption that as long as people are informed about which data are collected, by whom, and for which purposes, they can then make an informed decision. The laws seek to ensure people’s autonomy by providing choices. In a world driven by AI, however, we can no longer fully understand what is happening to our data. The underlying logic of data-processing operations and the purposes for which they are used have now become so complex that they can only be described by means of intricate privacy policies that are simply not comprehensible to the average citizen. It is an illusion to suppose that by better informing individuals about which data are processed and for which purposes, we can enable them to make more rational choices and to better exercise their rights. In a world of too many choices, the autonomy of the individual is reduced rather than increased. We cannot phrase it better than Cass Sunstein in his book, The Ethics of Influence (2016):

[A]utonomy does not require choices everywhere; it does not justify an insistence on active choosing in all contexts. (…) People should be allowed to devote their attention to the questions that, in their view, deserve attention. If people have to make choices everywhere, their autonomy is reduced, if only because they cannot focus on those activities that seem to them most worthy of their time.[13]

More fundamental is the point that a regulatory system that relies on the concept of free choice to protect people against consequences of AI is undermined by the very technology this system aims to protect us against. If AI knows us better than we do ourselves, it can manipulate us, and strengthening the information and consent requirements will not help.

Yuval Harari explains it well:

What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket?[14]

The reality is that organizations find inscrutable ways of meeting information and consent requirements that discourage individuals from specifying their true preferences and often make them feel forced to click “OK” to obtain access to services.[15] The commercial interest in collecting as much data as possible is so large that, in practice, every available trick is often used to entice website visitors and app users to opt in (or to make it difficult for them to opt out). The design thereby exploits the predictably irrational behavior of people so that they make choices that are not necessarily in their best interests.[16] A very simple example is that consumers are more likely to click on a blue button than a gray button, even if the blue one is the least favorable option. Tellingly, Google once tested 41 shades of blue to measure user response.[17] Established companies, too, deliberately make it difficult for consumers to express their actual choice, and seem to have little awareness of doing anything wrong. By comparison, if you were to deliberately mislead someone in the offline world, everyone would immediately feel that this was unacceptable behavior.[18] Part of the explanation is that the digital newcomers have deliberately and systematically pushed the limits of their digital services in order to get their users accustomed to certain processing practices.[19] Although many of these privacy practices are now under investigation by privacy and antitrust authorities around the world,[20] these practices have obscured the view of what is or is not an ethical use of data.

Consent-based data protection laws have resulted in what has been coined “mechanical proceduralism,”[21] whereby organizations go through the mechanics of notice and consent without any reflection on whether the relevant use of data is legitimate in the first place. In other words, the current preoccupation with what is legal distracts us from asking what is legitimate to do with data. We see this reflected in the EU’s highest court having to decide whether a pre-ticked box constitutes consent (surprise: it does not) and in the EDPB feeling compelled to update its earlier guidance by spelling out whether cookie walls constitute “freely given” consent (surprise: they do not).[22]

Privacy legislation needs to regain its role of determining what is and is not permissible. Instead of a legal system based on consent, we need to re-think the social contract for our digital society, by having the difficult discussion about where the red lines for data use should be rather than passing the responsibility for a fair digital society to individuals to make choices that they cannot oversee.[23]

 

The U.S. System: Notice and Choice (as Opposed to Notice and Consent)

In the United States, companies routinely require consumers to consent to the processing of their data, such as by clicking a box stating that they agree to the company’s privacy policy, although there is generally no consent requirement under U.S. law.[24] This may reflect an attempt to hedge the risk of consumers challenging the privacy terms as an ‘unfair trade practice’:[25] the argument is that the consumer made an informed decision to accept the privacy terms as part of the transaction, and that the consumer was free to reject the company’s offering and choose another. In reality, of course, consumers have little actual choice, particularly where the competing options are limited and offer similar privacy terms. In economic terms, we have an imperfect market in which companies do not compete on privacy, given their aligned interest in acquiring as much of consumers’ personal information as possible.[26] This leads to a race to the bottom in terms of privacy protection.[27]

An interesting parallel here is that the EDPB recently rejected the argument that consumers would have freedom of choice in these cases:[28]

The EDPB considers that consent cannot be considered as freely given if a controller argues that a choice exists between its service that includes consenting to the use of personal data for additional purposes on the one hand, and an equivalent service offered by a different controller on the other hand. In such a case, the freedom of choice would be made dependent on what other market players do and whether an individual data subject would find the other controller’s services genuinely equivalent. It would furthermore imply an obligation for controllers to monitor market developments to ensure the continued validity of consent for their data processing activities, as a competitor may alter its service at a later stage. Hence, using this argument means a consent relying on an alternative option offered by a third party fails to comply with the GDPR, meaning that a service provider cannot prevent data subjects from accessing a service on the basis that they do not consent.

By now, U.S. privacy advocates also urge the public and private sectors to move away from consent as a privacy tool. For example, Lanier and Weyl argued that privacy concepts of consent “aren’t meaningful when the uses of data have become highly technical, obscure, unpredictable, and psychologically manipulative.”[29] In a similar vein, Burt argued that consent cannot be expected to play a meaningful role, “[b]ecause the threat of unintended inferences reduces our ability to understand the value of our data, our expectations about our privacy—and therefore what we can meaningfully consent to—are becoming less consequential.”[30]

Moving away from consent- and choice-based privacy models is only part of the equation, however. In many cases, commentators have even greater concerns about the economic ramifications of large-scale data processing and whether consumers will share in the wealth generated by their data.

 

Disentangling Economic Objectives from Privacy Objectives

Beyond being a privacy concept, consent can also serve as an economic tool: a means of giving consumers leverage to extract value from companies for the use of their data. The privacy objectives and economic objectives may be complementary, even to the point that it may not be easy to distinguish between them. We need to untangle these objectives, however, because they may yield different results.

Where the goal is predominantly economic in nature, the conversation tends to shift away from privacy to economic inequality and fair compensation. We will discuss the relevant proposals in more detail below, but note that all proposals require that we put a ‘price tag’ on personal information.

“It is obscene to suppose that this [privacy] harm can be reduced to the obvious fact that users receive no fee for the raw material they supply. That critique is a feat of misdirection that would use a pricing mechanism to institutionalize and therefore legitimate the extraction of human behavior for manufacturing and sale.” – Zuboff, p. 94.

  1. No Established Valuation Method

Despite personal information already being bought and sold among companies, such as data brokers, there is not yet an established method for calculating the value of personal information.[31] A single method that works in all circumstances will likely prove impossible to establish. For example, the value of data to a company will depend on the relevant use, which may well differ from company to company. The value of data elements also often depends on the combination of data elements available, whereby analytics performed on mundane data may lead to valuable inferences that are sensitive for the consumer. How much value should be placed on the individual data elements, as compared with the insights the company may create by combining these data elements, or even by combining them across all customers?[32]

The value of data to a company may, moreover, have little correlation with the privacy risks to the consumer. The cost to consumers may depend not only on how sensitively their data are used but also on the potential impact if their data are lost. For example, information about a consumer’s personal proclivities may be worth only a limited dollar amount to a company, yet the consumer may be unwilling to sell that data to the company for that amount (or, potentially, for any amount). When information is lost, the personal harm or embarrassment to the individual may be much greater than the value to the company. The impact of consumers’ data being lost will also often depend on the combination of data elements. For instance, an email address is not in itself sensitive, but in combination with a password it becomes highly sensitive, as people often use the same email/password combination to access different websites.

  2. Different Approaches to Valuation

One approach might be to leave it to the consumer and company to negotiate the value of the consumer’s data to that company, but this would be susceptible to all of the problems discussed above, such as information asymmetries and unequal bargaining power. It may also make privacy a luxury good for the affluent, who would feel less economic pressure to sell their personal information, thus resulting in less privacy protection for consumers who are less economically secure.[33]

Another approach, suggested by Lanier and Weyl, would require companies to pay consumers for using their data, with the payment terms negotiated by new entities similar to labor unions that would engage in collective bargaining with companies over data rights.[34] However, this proposal would also require consumers to start paying companies for services that today are provided free of charge in exchange for the consumer’s data, such as email, social media, and cloud-based services. Thus, a consumer may end up ahead or behind financially, depending on the cost of the services that the consumer chooses to use and the negotiated value of the consumer’s data.

A third approach is the “data dividend” concept proposed by Governor Newsom. As the concept has not yet been clearly defined, some commentators suggest that it involves individualized payments directly to consumers, while others suggest that payments would be made into a government fund from which fixed payments would be made to consumers, similar to the Alaska Permanent Fund, which distributes some of the wealth generated from Alaska’s oil resources to the state’s residents. Given that data has been called the “new oil,” the idea of a data dividend modeled on the Alaska payments may seem apt, although the analogy quickly breaks down due to the greater difficulty of calculating the value of data.[35] Moreover, commentators have rightly noted that the data dividend paid to an individual is likely to be mere “peanuts,” given the vast numbers of consumers whose information is being used.[36]
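
To see why such payments are likely to be small, consider a rough back-of-the-envelope calculation. The sketch below uses purely illustrative figures (not drawn from the article or the proposals it cites): dividing even a generous share of a platform’s annual revenue equally across its user base yields, at most, a few dollars per person per year.

```python
# A back-of-the-envelope sketch with hypothetical figures (illustrative only):
# splitting a fixed share of a platform's annual revenue equally across all
# users yields a small per-user "data dividend".
def naive_data_dividend(annual_revenue: float, users: float, share: float) -> float:
    """Return the per-user payout if `share` of revenue is distributed equally."""
    return annual_revenue * share / users

# Illustrative inputs: $3 billion in revenue, 300 million users, a 10% pool.
per_user = naive_data_dividend(annual_revenue=3e9, users=300e6, share=0.10)
print(f"Per-user annual dividend: ${per_user:.2f}")  # prints $1.00
```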

Whatever valuation and payment model, if any, might be adopted, it risks devaluing privacy protection. Both the data dividend concept and the CCPA’s approach to financial incentives suggest that the value of a consumer’s personal information is measured by its value to the company.[37] As indicated above, this value may have little correlation with the privacy risks to the consumer. Though it is commendable that these proposals seek to provide some measure of compensation to consumers, it is important to avoid conflating economic and privacy considerations, and to avoid a situation where consumers trade away their data or privacy rights.[38] Although societies certainly may decide to require some degree of compensation to consumers as a wealth redistribution measure, it will be important to present this as an economic tool and not as a privacy measure.

 

Closing Thoughts

As the late Giovanni Buttarelli forewarned in his final vision statement, “Notions of ‘data ownership’ and legitimization of a market for data risks a further commoditization of the self and atomization of society…. The right to human dignity demands limits to the degree to which an individual can be scanned, monitored and monetized—irrespective of any claims to putative ‘consent.’”[39]

There are many reasons why societies may seek to distribute a portion of the wealth generated from personal information to the consumers who are the source and subject of this personal information. This does not lessen the need for privacy laws to protect this personal information, however. By distinguishing clearly between economic objectives and privacy objectives, and moving away from consent-based models that fall short of both objectives, we can best protect consumers and their data, while still enabling companies to unlock the benefits of AI and machine learning for industry, society, and consumers.

[1]Lokke Moerel is a Professor of Global ICT Law at Tilburg University and Senior of Counsel at Morrison & Foerster in Berlin. Christine Lyon is a partner at Morrison & Foerster in Palo Alto, California.

[2]E. Brynjolfsson & A. McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant New Technologies, London: W.W. Norton & Company 2014, which gives a good overview of the friction and disruption that arose from the industrial revolution, how society ultimately responded and regulated negative excesses, and the friction and disruption now caused by the digital revolution. A less accessible, but very instructive, book on the risks of digitization and big tech for society is S. Zuboff, The Age of Surveillance Capitalism, New York: Public Affairs 2019 (hereinafter, “Zuboff 2019”).

[3]An exploration of these new issues, as well as proposals on how to regulate the new reality from a data protection perspective, can be found in L. Moerel, Big Data Protection: How to Make the Draft EU Regulation on Data Protection Future Proof (oration Tilburg), Tilburg: Tilburg University 2014 (hereinafter, “Moerel 2014”), pp. 9-13, and L. Moerel & C. Prins, Privacy for the Homo Digitalis: Proposal for a New Regulatory Framework for Data Protection in the Light of Big Data and the Internet of Things (2016), [ssrn.com/abstract=2784123] (hereinafter, “Moerel & Prins 2016”). On ethical design issues, see J. Van den Hoven, S. Miller & T. Pogge (eds.), Designing in Ethics, Cambridge: CUP 2017 (hereinafter, “Van den Hoven, Miller & Pogge 2017”), p. 5.

[4]L. Vaas, “FTC renews call for single federal privacy law,” Naked Security by Sophos (May 10, 2019), https://nakedsecurity.sophos.com/2019/05/10/ftc-renews-call-for-single-federal-privacy-law/.

[5]Jaron Lanier and E. Glen Weyl, “A Blueprint for a Better Digital Society,” Harvard Business Review (Sept. 26, 2018), https://hbr.org/2018/09/a-blueprint-for-a-better-digital-society.

[6]Zuboff 2019, p. 94, refers to this now commonly cited adage, but nuances it by indicating that consumers are not the product, but rather “the objects from which raw materials are extracted and expropriated for Google’s prediction factories. Predictions about our behavior are Google’s products, and they are sold to its actual customers but not to us.”

[7]Angel Au-Yeung, “California Wants to Copy Alaska and Pay People a ‘Data Dividend.’ Is It Realistic?” Forbes (Feb. 14, 2019), https://www.forbes.com/sites/angelauyeung/2019/02/14/california-wants-to-copy-alaska-and-pay-people-a-data-dividend–is-it-realistic/#30486ee6222c.

[8]Cal. Civ. Code § 1798.125(b)(1) (“A business may offer financial incentives, including payments to consumers as compensation for the collection of personal information, the sale of personal information, or the deletion of personal information. A business may also offer a different price, rate, level, or quality of goods or services to the consumer if that price or difference is directly related to the value provided to the business by the consumer’s data”). The California Attorney General’s final proposed CCPA regulations, issued on June 1, 2020 (Final Proposed CCPA Regulations), expand on this obligation by providing that a business must be able to show that the financial incentive or price or service difference is reasonably related to the value of the consumer’s data. (Final Proposed CCPA Regulations at 20 CCR § 999.307(b).)  The draft regulations also require the business to use and document a reasonable and good faith method for calculating the value of the consumer’s data. Id. 

[9]Moerel 2014, p. 21.

[10]Isobel Asher Hamilton, “Microsoft CEO Satya Nadella made a global call for countries to come together to create new GDPR-style data privacy laws,” Business Insider (Jan. 24, 2019), available at https://www.businessinsider.com/satya-nadella-on-gdpr-2019-1.

[11]L. Moerel, Reflections on the Impact of the Digital Revolution on Corporate Governance of Listed Companies, first published in Dutch by Uitgeverij Paris in 2019 and written at the request of the Dutch Corporate Law Association for its annual conference, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3519872, at para. 4.

[12]GDPR Art. 6(1): “Processing shall be lawful only if and to the extent that at least one of the following applies:

(a) the data subject has given consent to the processing of his or her personal data for one or more specific purposes….”

[13]Cass Sunstein, The Ethics of Influence, Cambridge University Press 2016 (hereinafter: Sunstein 2016), p. 65.

[14]Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow, Harper 2017 (hereinafter: Harari 2017), p. 277.

[15]This is separate from the issue that arises where companies require a consumer to provide consent for use of their data for commercial purposes as a condition of receiving goods or services (so-called tracking walls and cookie walls). It also may arise if a consumer is required to provide a bundled consent that covers multiple data processing activities, without the ability to choose whether to consent to a particular data processing activity within that bundle. In May 2020, the European Data Protection Board (EDPB) updated its guidance on the requirements for consent under the GDPR, now specifically stating that consent is not considered freely given in the case of cookie walls; see EDPB Guidelines 05/2020 on consent under Regulation 2016/679, Version 1.0, adopted on May 4, 2020, available at https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_guidelines_202005_consent_en.pdf (EDPB Guidelines on Consent 2020).

[16]This is the field of behavioral economics. See D. Ariely, Predictably Irrational, London: HarperCollins Publishers 2009 (hereinafter, “Ariely 2009”), at Introduction. For a description of techniques reportedly used by large tech companies, see the report from the Norwegian Consumer Council, Deceived by Design: How tech companies use dark patterns to discourage our privacy rights (June 27, 2018), available at https://fil.forbrukerradet.no/wp-content/uploads/2018/06/2018-06-27-deceived-by-design-final.pdf (hereinafter, “Norwegian Consumer Council 2018”). The Dutch Authority for Consumers & Markets (ACM) has announced that the abuse of this kind of predictably irrational consumer behavior must cease and that companies have a duty of care to design the choice architecture in a way that is fair and good for the consumer. Authority for Consumers & Markets, Taking advantage of predictable consumer behavior online should stop (Sept. 2018), available at https://fil.forbrukerradet.no/wp-content/uploads/2018/06/2018-06-27-deceived-by-design-final.pdf.

[17]See Norwegian Consumer Council 2018, p. 19, referencing L.M. Holson, “Putting a Bolder Face on Google,” New York Times (Feb. 28, 2009), www.nytimes.com/2009/03/01/business/01marissa.html.

[18]See Van den Hoven, Miller & Pogge 2017, p. 25, where the ethical dimension of misleading choice architecture is well illustrated by an example in which someone with Alzheimer’s is deliberately confused by rearranging his or her system of reminders. For an explanation of a similar phenomenon, see Ariely 2009, Introduction and Chapter 14: “Why Dealing with Cash Makes Us More Honest,” where it is demonstrated that most unfair practices are one step removed from stealing cash. Apparently, it feels less bad to mess around in accounting than to steal real money from someone.

[19]Zuboff 2019 convincingly describes that some apparent failures of judgment that technology companies’ management regard as missteps and bugs (for examples, see p. 159) are actually deliberate, systematic actions intended to habituate their users to certain practices in order to eventually adapt social norms. For what Zuboff 2019 calls the Dispossession Cycle, see pp. 138-166.

[20]Zuboff 2019 deals extensively with the fascinating question of how it is possible that technology companies got away with these practices for so long. See pp. 100-101.

[21]Moerel & Prins 2016, para. 3.

[22]EDPB Guidelines on Consent, p. 10.

[23]Lokke Moerel, IAPP The GDPR at Two: Expert Perspectives, “EU data protection laws are flawed — they undermine the very autonomy of the individuals they set out to protect”, 26 May 2020, https://iapp.org/resources/article/gdpr-at-two-expert-perspectives/.

[24]U.S. privacy laws require consent only in limited circumstances (e.g., the Children’s Online Privacy Protection Act, Fair Credit Reporting Act, and Health Insurance Portability and Accountability Act), and those laws typically would require a more specific form of consent in any event.

[26]See the following for a discussion of why, from an economic perspective, information asymmetries and transaction costs lead to market failures that require legal intervention: Frederik J. Zuiderveen Borgesius, “Consent to Behavioural Targeting in European Law – What Are the Policy Implications of Insights From Behavioural Economics,” Amsterdam Law School Legal Studies Research Paper No. 2013-43, Institute for Information Law Research Paper No. 2013-02 (hereinafter: Borgesius 2013), pp. 28 and 37, SSRN-id2300969.pdf.

[28]EDPB Guidelines on Consent, p. 10.

[29]Lanier and Weyl, “A Blueprint for a Better Digital Society,” Harvard Business Review (Sept. 26, 2018).

[30]Andrew Burt, “Privacy and Cybersecurity Are Converging. Here’s Why That Matters for People and for Companies,” Harvard Business Review (Jan. 3, 2019), https://hbr.org/2019/01/privacy-and-cybersecurity-are-converging-heres-why-that-matters-for-people-and-for-companies.

[31]See, e.g., Adam Thimmesch, “Transacting in Data: Tax, Privacy, and the New Economy,” 94 Denv. L. Rev. 146 (2016) (hereinafter, “Thimmesch”), pp. 174-177 (identifying a number of obstacles to placing a valuation on personal information and noting that “[u]nless and until a market price develops for personal data or for the digital products that are the tools of data collection, it may be impossible to set their value”). See also Dante Disparte and Daniel Wagner, “Do You Know What Your Company’s Data Is Worth?” Harvard Business Review (Sept. 16, 2016) (explaining the importance of being able to accurately quantify the enterprise value of data (EvD) but observing that “[d]efinitions for what constitutes EvD, and methodologies to calculate its value, remain in their infancy”).

[32]Thimmesch at 176: “To start, each individual datum is largely worthless to an aggregator. It is the network effects that result in significant gains to the aggregator when enough data are collected. Further complicating matters is the fact that the ultimate value of personal data to an aggregator includes the value generated by that aggregator through the use of its algorithms or other data-management tools. The monetized value of those data is not the value of the raw data, and isolating the value of the raw data may be impossible.”

[33]Moerel & Prins 2016, para. 2.3.2. See also Evgeny Morozov, To Save Everything, Click Here: The Folly of Technological Solutionism, Public Affairs 2013, who warns that, for pay-as-you-live insurance, the choice will not be a fully free one for some people, since those on a limited budget may not be able to afford privacy-friendly insurance. After all, it is bound to be more expensive.

[34]Lanier and Weyl, “A Blueprint for a Better Digital Society,” Harvard Business Review (Sept. 26, 2018) (“For data dignity to work, we need an additional layer of organizations of intermediate size to bridge the gap. We call these organizations ‘mediators of individual data,’ or MIDs. A MID is a group of volunteers with its own rules that represents its members in a wide range of ways. It will negotiate data royalties or wages, to bring the power of collective bargaining to the people who are the sources of valuable data….”). Lanier extends this theory more explicitly to personal information in his New York Times video essay at https://www.nytimes.com/interactive/2019/09/23/opinion/data-privacy-jaron-lanier.html. See also Imanol Arrieta Ibarra, Leonard Goff, Diego Jiménez Hernández, Jaron Lanier, and E. Glen Weyl, “Should We Treat Data as Labor? Moving Beyond ‘Free’,” American Economic Association Papers & Proceedings, Vol. 1, No. 1 (May 2018), https://www.aeaweb.org/articles?id=10.1257/pandp.20181003, at p. 4 (suggesting that data unions could also exert power through the equivalent of labor strikes: “[D]ata laborers could organize a “data labor union” that would collectively bargain with [large technology companies]. While no individual user has much bargaining power, a union that filters platform access to user data could credibly call a powerful strike. Such a union could be an access gateway, making a strike easy to enforce and on a social network, where users would be pressured by friends not to break a strike, this might be particularly effective.”).

[35]See, e.g., Marco della Cava, “Calif. tech law would compensate for data,” USA Today (Mar. 11, 2019) (“[U]nlike the Alaska Permanent Fund, which in the ’80s started doling out $1,000-and-up checks to residents who were sharing in the state’s easily tallied oil wealth, a California data dividend would have to apply a concrete value to largely intangible and often anonymized digital information. There also is concern that such a dividend would establish a pay-for-privacy construct that would be biased against the poor, or spawn a tech-tax to cover the dividend that might push some tech companies out of the state.”).

[36]Steven Hill, “Opinion: Newsom’s California Data Dividend Idea is a Dead End,” East Bay Times (Mar. 7, 2019) (“While Newsom has yet to release details…the money each individual would receive amounts to peanuts. Each of Twitter’s 321 million users would receive about $2.83 [if the company proportionally distributed its revenue to users]; a Reddit user about 30 cents. And paying those amounts to users would leave these companies with zero revenue or profits. So in reality, users would receive far less. Online discount coupons for McDonald’s would be more lucrative.”).

[37]Cal. Civ. Code § 1798.125(a)(2) (“Nothing in this subdivision prohibits a business from charging a consumer a different price or rate, or from providing a different level or quality of goods or services to the consumer, if that difference is reasonably related to the value provided to the business by the consumer’s data.”). The CCPA originally provided that the difference must be “directly related to the value provided to the consumer by the consumer’s data,” but it was later amended to require the difference to be “directly related to the value provided to the business by the consumer’s data.” (Emphases added.) The CCPA does not prescribe how a business should make this calculation. The Final Proposed CCPA Regulations would require businesses to use one or more of the following calculation methods, or “any other practical and reliable method of calculation used in good-faith” (Final Proposed CCPA Regulations, 20 CCR § 999.307(b)):

[38]See in a similar vein the German Data Ethics Commission, Standards for the Use of Personal Data, Standard 6:6, where it argues that data should not be referred to as a “counter-performance” provided in exchange for a service, even though the term sums up the issue in a nutshell and has helped to raise awareness among the general public. https://www.bmjv.de/SharedDocs/Downloads/DE/Themen/Fokusthemen/Gutachten_DEK_EN.pdf?__blob=publicationFile&v=2.

[39]International Association of Privacy Professionals, Privacy 2030: A New Vision for Europe, at p. 19, https://iapp.org/media/pdf/resource_center/giovanni_manifesto.pdf.

 

 

 

Remarks on Diversity and Inclusion by Michael McCullough

Last Thursday, June 18, 2020, Macy’s Chief Privacy Officer and FPF Advisory Board member Michael McCullough spoke about diversity and inclusion at WireWheel’s Spokes 2020 conference. 

The Question:

I’ve spoken to each of you about your views on diversity and equality, and about how that’s reflected in our privacy and data protection community. This has been an especially important time for that, but our community needs to help drive on these important issues. How do you think we can do that?

Response:

This is such a soul-crushing topic that we HAVE to tackle and keep tackling and keep talking about. People of conscience and goodwill MUST constantly strive for fairness and egalitarianism in all aspects of our society.

This is a painful time for me and so many others… and it’s hard to talk about – certainly in this setting. But I’m going to do it because it matters. And Justin, I appreciate your candor, openness and clear commitment to having these conversations.

I get asked a lot – personally and professionally: “What are you; I mean what’s your background?” I respond – generally – cheerfully enough. I am half black and half white. But none of that is even close to what I AM. That is the most uninteresting, insipid question ABOUT ME. I know most mean no harm – it may be of genuine interest. But it’s really more about the asker, a crutch to know what box to put me in – even if subconsciously. Even if that is not the case, it FEELS like that’s the case.

So…I am black and white. While more and more people today are a mix of some “races,” I grew up with the knowledge that my mere existence was unlawful in some quarters due to anti-miscegenation laws…and you may be surprised to know this repugnant language remained in state constitutions after Loving v Virginia — till 1987 in Mississippi; 1998 in South Carolina and 2000 in Alabama.

Being both black and white I KNOW how hard this topic is. For some white people, there’s guilt and facing up to one’s own prejudices and privileges that are uncomfortable, or a semblance of shame for feeling they’ve not done enough to be part of the solution, and dozens of other discomfiting sensations.

For many black folks, there’s anguish, historical… living history and immediate pain. There’s rage. Productive, real conversations are hard when there is existential pain. And sometimes, at least for me, those conversations feel like bowling for soup and a poultice for someone else’s wound that’s doing just enough for an entry in the CSR (Corporate Sustainability Report). I myself, and so many folks I have talked to, are self-policing on how to even talk about racism and white supremacy (especially in a professional/corporate environment): there’s hyper awareness ‘to be measured’ so as not to appear strident and feed into a stereotype (or be cast as too emotive for business). There’s personal risk in this talk.

So, this is my background to answer the question, “how do we think we can drive diversity and inclusion?”

We all know this is an institutional, systemic problem. Structural discrimination doesn’t require conscious racists or sexists. We all know there is no silver bullet. All we can do is start tearing down those institutions, this increasingly colonizing white supremacy sexist archipelago…one brick at a time.

Business efforts have to move from compensatory to reparative. Strategies for Diversity & Inclusion – commitments to reevaluate hiring practices, ensuring diversity in supply chain and vendors — are compensatory. As the adage goes “culture eats strategy for breakfast.” We need to planfully invest in building cultures of fairness and equity.

We are all getting the emails and messaging from companies stating their commitment to optimize diversity and inclusion and their strategies to build an inclusive environment. 130 plus years late – Aunt Jemima is being retired, as are Uncle Ben and Ms. Butterworth. Seriously! It’s 2020. These are major companies. I know they have Equal Opportunity policies and pro-diversity programs…. Something is wrong. Something is very wrong.

Then you have Ben and Jerry’s statement. I am shocked and disappointed that their statement entitled “We Must Dismantle White Supremacy: Silence Is NOT An Option” was so shocking. THAT stake in the ground is REPARATIVE. That is a culture bomb. We can no longer just be pro-diversity; pro-fairness; pro-equity; pro-black; pro-women; pro-Jewish or Sikh — we have to become culturally ANTI-RACIST; ANTI-SEXIST; ANTI-DISCRIMINATORY. That semantic difference matters — here’s why:

Relevant to our Worlds and what we control… We are very good at measuring and managing risk. There’s always a balance and certain tolerances – a calculus. But the “purpose” of risk management ultimately is economic and — in quotations “fiduciary.” Within that calculus we are less good at recognizing and accounting for “harms.” Bias harms are hard to tease out… and sometimes it is hard to get people to understand why bias harms are bad and should be cured; not managed… this is increasingly true when the “harms” potentially affect “only” small groups or are individualized, which is relevant as we increasingly pursue meaningful personalization. Risk management, we are good at the big stuff; less good at the small stuff. We need to be hyper aware of how we group and categorize “others.” Mere categorization can lead to harm…even if unintentional…(“WHAT ARE YOU, I mean what’s your background”) and especially when profitable, because “racism is productive”. In the obvious and clearly egregious cases, Aunt Jemima and Uncle Ben’s are profitable. So, what to do?

We have all had the rousing IT exec that makes a “zero defect environment” a MISSION. Is it achievable? No…. But it gives purpose beyond the traffic-lighting we can get caught up in in the day-to-day. Likewise, we can set a standard as “zero discrimination/zero harm” (zero defect) in our data practices. It gives mission and purpose to what otherwise is simply effective management. This mission can be supported operationally by assurance activities like adding, maintaining and appropriately updating equity analysis for code audits.

Many of the other things we can do have been talked about for years.

Demanding not just diversity on boards, but people with a demonstrated commitment to fairness and equity as a mission. And I mean women, I mean people of color, I mean trans and non-binary people. The co-founder of Reddit stepped down from its board and asked to be replaced by a person of color, after recognizing his own white privilege through his marriage to Serena Williams and having bi-racial children.

Commit to a diverse pipeline and curate talent (so don’t just do executive recruiting at schools, invest in, co-design the programs that teach and build zero discrimination coding and design), measure it and seek feedback internally and externally. Defang an inarguably unfair and institutionally white supremacist carceral system by promoting programs and focusing on giving the formerly incarcerated a hand up. Go beyond including diverse images in your spaces; seek out and ensure diverse image makers and story tellers are contributing to spaces. Give time, dollars and people to groups that are challenging status quo approaches to tech and be partners in experimentation. Challenge filter bubbles in your own organizations — I’ve seen the breakrooms and lunch halls and all hands meetings. As leaders, seek diversity in your mentorship circle; reverse mentor with diverse people.

We, specifically, have an opportunity as a community NOT to further export existing bias, structural racism and sexism into Code, and to begin unwinding and righting that ship, today. This is a singular moment to do that.

I believe that people, especially in this community, are overwhelmingly good and fair – we choose careers that protect people. But complacency in the face of complexity and difficulty, no matter how subdermal, is not an option – unpacking the non-obvious and finding solutions for the complex and difficult is what we do! This (moment; this need) will not pass – we have to reframe and reshape our corporate cultures; we have to be more than allies, but partners in liberation, fairness and equity for all.

Now, I believe deeply in free speech. When I became a Marine, I took an oath to protect and defend the Constitution. I would fight and die to protect 1st Amendment speech I find abhorrent. But racists and sexists should have no harbor in business and we have to do more than just be — PRO. We have to dig in, do the hard work and excise the business pro-ductivity of sexism and racism.

Finally, I just read a book with Future of Privacy Forum called “Race After Technology: Abolitionist Tools for the New Jim Code” by Princeton Sociologist Ruha Benjamin. She goes deep into the structures and encoding of white supremacy, the way that it infects CODE, and how “racism is productive.” It’s revelatory and worth the read to at least spark the imagination for “what can we do” (and “who do we want to BE”).

Postscript:

If you found my comments compelling in any way, I urge you to read Ruha’s book and the work of the many scholars illuminating the historical contexts, costs and caustic impacts of white supremacy and racism on our society today. I urge you to really listen to and co-imagine reshaping company cultures with your colleagues who bring life experience with racism and bias to the workplace. I urge all of us to reflect on our own roles and opportunities to harness this moment to drive critical change.

Michael ‘Mac’ McCullough is the Chief Privacy Officer and GRC Leader at Macy’s, a former Marine, and a member of the FPF Advisory Board. These remarks were delivered in his personal capacity and are shared here to mark Juneteenth. The remarks are lightly edited.

Juneteenth

FPF is closed for Juneteenth as our staff reflects on both the history and current state of racism in America.  Our social media accounts will be silent, other than to elevate voices that can help us learn and take action on issues such as equity and inclusion.

In that spirit, we would like to call attention to the work of Professor Ruha Benjamin and her book Race After Technology: Abolitionist Tools for the New Jim Code. The FPF Privacy Book Club was honored to learn from Professor Benjamin this week and we invite you to watch the video and order her book. We found it to be a thought-provoking commentary on how emerging technologies can reinforce white supremacy and deepen social inequity. We would also like to call attention to 15+ Books by Black Scholars the Tech Industry Needs to Read Now, posted by the Center for Critical Internet Inquiry at UCLA.

Supreme Court Rules that LGBTQ Employees Deserve Workplace Protections–More Progress is Needed to Combat Unfairness and Disparity

Authors: Katelyn Ringrose (Christopher Wolf Diversity Law Fellow) and Dr. Sara Jordan (Policy Counsel, Artificial Intelligence and Ethics)

Today’s Supreme Court ruling in Bostock v. Clayton County—clarifying that Title VII of the Civil Rights Act bans employment discrimination on the basis of sexual orientation and gender identity—is a major victory in the fight for LGBTQ civil rights. Title VII established the Equal Employment Opportunity Commission (EEOC), and bans discrimination on the basis of sex, race, color, national origin and religion by employers, schools, and trade unions involved in interstate commerce or those doing business with the federal government. Today’s 6-3 ruling aligns with Obama-era protections, including a 2014 executive order extending Title VII protections to LGBTQ individuals working for federal contractors.

In this post, we examine the impact of today’s decision, as well as (1) voluntary anti-discrimination efforts adopted by companies for activities not subject to federal protections; (2) helpful resources on the nexus of privacy, LGBTQ protections, and big data; and (3) the work FPF has done to identify and mitigate potential harms posed by automated decision-making. 

In Bostock, the Supreme Court determined that discrimination on the basis of sexual orientation or transgender status are forms of sex discrimination, holding: “Today, we must decide whether an employer can fire someone simply for being homosexual or transgender. The answer is clear. An employer who fires an individual for being homosexual or transgender fires that person for traits or actions it would not have questioned in members of a different sex. Sex plays a necessary and undisguisable role in the decision, exactly what Title VII forbids.” 

Bostock resolved the issue through analysis of three consolidated cases: Bostock v. Clayton County, Altitude Express, Inc. v. Zarda, and R.G. & G.R. Harris Funeral Homes, Inc. v. EEOC.

“Today is a great day for the LGBTQ community and LGBTQ workers across the nation. The United States Supreme Court decision could not have come at a better time given the current COVID-19 crisis and the protests taking place across the country. However, there still remains much work to be done, especially around the areas of data and surveillance tools. The well-documented potential for abuse and misuse of these tools by unregulated corporations as well as government and law enforcement agencies should give serious pause to anyone who values their privacy–especially members of communities like ours that have been historically marginalized and discriminated against,” says Carlos Gutierrez, Deputy Director & General Counsel of LGBT Tech. “Today’s decision will protect over 8 million LGBT workers from work discrimination based on their sexual orientation or gender identity. This is especially heartening given that 47% or 386,000 of LGBTQ health care workers, people on the frontlines of the COVID-19 battle, live in states that had no legal job discrimination protections.” 

We celebrate today’s win. However, it is now more critical than ever to address data-driven unfairness that remains legally permissible and harmful to the LGBTQ community. 

Bostock should also influence a range of anti-discrimination efforts. In recent years, many organizations have engaged in various efforts to combat discrimination even when their activities are not directly regulated by the Civil Rights Act. When implementing such anti-discrimination programs, organizations often look to the Act to identify protected classes and activities. Bostock provides clarity — organizations should include sexual orientation and gender identity in the list of protected classes even if their activities wouldn’t otherwise be regulated under Title VII.

Anti-Discrimination Efforts //

Title VII of the Civil Rights Act has historically barred discrimination on the basis of sex, race, color, national origin and religion; the Civil Rights Act, including Title VII, is the starting point for anti-discrimination compliance programs. Even companies that do not have direct obligations under the Act (including ad platforms) have utilized the Act to guide their anti-discrimination efforts (see the Network Advertising Initiative’s Code of Conduct). According to the Human Rights Campaign, the share of Fortune 100 companies that have publicly pledged to non-discrimination employment policies increased from 11% (gender identity) and 96% (sexual orientation) in 2003 to 97% and 98%, respectively, by 2018.

We caution that simply not collecting, or ignoring, sensitive information will not always ensure that discrimination is avoided. Even without explicit data, proxy information can reveal sensitive information. Furthermore, in order to assess whether protected classes are treated unfairly, it will sometimes be important to collect information that can identify discrimination. While sensitive data collection has its benefits and risks, the lack of data available to researchers can mean that policymakers do not have the information necessary to understand disparities in enough depth to create responsive policy solutions.

Helpful Resources // 

Unfairness by Algorithm //

While discriminatory decisions made by a human are clearly regulated, the full range of potentially discriminatory decisions made by a computer are not yet well understood. Yet algorithmic harms may be similarly pernicious, while being more difficult to identify and less amenable to redress using available legal remedies.

In a 2017 Future of Privacy Forum report, Unfairness by Algorithm: Distilling the Harms of Automated Decision Making, we identified four types of harms—loss of opportunity, economic loss, social detriment, and loss of liberty—to depict the various spheres of life where automated decision-making can cause injury. The report recognizes that discriminatory decisions and resulting unfairness as determined by algorithms can lead to distinct collective and societal harms. For example, use of proxies, such as “gayborhood” ZIP codes in algorithms or resume clues regarding LGBTQ community activism, can lead to employment discrimination and result in differential access to job opportunities.
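
The proxy mechanism is easy to reproduce. The following minimal sketch (hypothetical synthetic data, not drawn from the FPF report) shows how a model that never sees a protected attribute can still produce sharply different selection rates across groups when it relies on a correlated proxy such as a ZIP code.

```python
# Hypothetical illustration: even when a protected attribute is dropped from
# the inputs, a correlated proxy (here, a ZIP code) can reproduce much of the
# same disparity found in historical outcomes.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: group membership (protected attribute) is correlated
# with neighborhood (proxy); 80% of each group lives in its "own" ZIP code.
group = rng.integers(0, 2, n)                       # 0 = group A, 1 = group B
zip_code = np.where(rng.random(n) < 0.8, group, 1 - group)

# Historical hiring outcomes biased against group A (the pattern being learned).
hired = (rng.random(n) < np.where(group == 0, 0.2, 0.6)).astype(int)

# "Fair" model: drops the protected attribute and scores candidates only by
# the historical hire rate of their ZIP code.
zip_rates = {z: hired[zip_code == z].mean() for z in (0, 1)}
score = np.array([zip_rates[z] for z in zip_code])
selected = score > score.mean()

# Selection rates still differ sharply by protected group, via the proxy alone.
for g, label in ((0, "group A"), (1, "group B")):
    print(f"{label}: selection rate = {selected[group == g].mean():.2f}")
```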

As organizations commit to LGBTQ protections, an adherence to data protection and fairness principles are one way to battle systemic discrimination. These principles include ensuring fairness in automated decisions, enhancing individual control of personal information, and protecting people from inaccurate and biased data. 

Conclusion //

Today’s decision regarding workplace protections could not be more welcome, particularly now as data from the Human Rights Campaign shows that 17% of LGBTQ people and 22% of LGBTQ people of color have reported becoming unemployed as a result of COVID-19. However, the fight for inclusivity and equality does not stop with law and legislation. Further work is necessary to ensure that data-driven programs uncover and redress discrimination, rather than perpetuate it.

Associated Press: Schools debate whether to detail positive tests for athletes

In a recent article published by the Associated Press in The Washington Post and The New York Times, the Future of Privacy Forum warns of the privacy risks of sharing information about positive COVID-19 tests among students, particularly student athletes who have already returned to campus to prepare for the upcoming sports season. Read an excerpt below and see the full article here.

Athletic programs sometimes avoid making formal injury announcements, citing the Health Insurance Portability and Accountability Act (HIPAA) or the Family Educational Rights and Privacy Act (FERPA). Both are designed to protect the privacy of an individual’s health records. The U.S. Education Department issued guidelines in March that said a school shouldn’t disclose personal identifiable information from student education records to the media even if it determines a health or safety emergency exists.

But is merely revealing a number going to enable anyone to identify which athletes tested positive? That’s up for debate.

Amelia Vance is the director of youth and education privacy at the Future of Privacy Forum, a think tank dedicated to data privacy issues. Vance believes releasing the number of positive tests effectively informs the public without sacrificing privacy.

Vance said disclosing the number of positive tests for a certain team would help notify members of the general public who may have come into contact with the athletes and could serve as a guide to those schools that haven’t welcomed students back to campus yet.

“If you’re saying six students tested positive or a student was exposed and therefore we’re having the whole team tested or things like that, that wouldn’t probably be traced back to an individual student,” Vance said. “Therefore, neither (FERPA or HIPAA) is going to apply, so any claim that privacy laws wouldn’t allow that disclosure would be disingenuous.

“The key there is to balance the public interest with the privacy of the students,” she said. “Most of the time, the information colleges and universities need to disclose don’t require the identification of a particular student to the press or general public.”

Read the article here.

TEN QUESTIONS ON AI RISK

Gauging the Liabilities of Artificial Intelligence Within Your Organization

Artificial intelligence and machine learning (AI/ML) generate significant value when used responsibly – and are the subject of growing investment for exactly these reasons. But AI/ML can also amplify organizations’ exposure to potential vulnerabilities, ranging from fairness and security issues to regulatory fines and reputational harm.

Many businesses are incorporating ever more machine-learning-based models into their operations, both on the back end and in consumer-facing contexts. Companies that use these systems without having developed them assume responsibility for managing, overseeing, and controlling these algorithmic learning models, in many cases without extensive internal resources to meet the technical demands involved.

General-purpose toolkits for this challenge are not yet broadly available. To help fill that gap while more technical support is developed, we have created a checklist of questions designed to support sufficient oversight of these systems. The questions in the attached checklist – “Ten Questions on AI Risk” – are meant to serve as an initial guide to gauging these risks, both during the build phase of AI/ML projects and beyond.

While there is no “one size fits all” answer for how to manage and monitor AI systems, these questions will hopefully provide a guide for companies using such models, allowing them to customize the questions and frame the answers in contexts specific to their own products, services, and internal operations. We hope to build on this start and offer additional, detailed resources for such organizations in the future.

The attached document was prepared by bnh.ai, a boutique law firm specializing in AI/ML analytics, in collaboration with the Future of Privacy Forum.

Polonetsky: Are the Online Programs Your Child’s School Uses Protecting Student Privacy? Some Things to Look For

Op-ed by Future of Privacy Forum CEO Jules Polonetsky published in The74.

As CEO of a global data protection nonprofit, I spend my workdays focused on helping policymakers and companies navigate new technologies and digital security concerns that have emerged in the wake of the COVID-19 pandemic.

Meanwhile, my children have adopted many of these technologies and are participating in online learning via Zoom and dozens of other platforms and apps — some of which have sparked serious concerns about student privacy and data security in the classroom.

These things are not contradictory. Here’s why.

Specific laws have been put in place to protect especially sensitive types of data. Your doctor uses services that safeguard your health information, and your bank relies on technology vendors that agree to comply with financial privacy laws.

Similarly, as the use of technology in the classroom skyrocketed in the past decade, federal and state laws were established that require stringent privacy protections for students.

To comply, many general consumer companies like Google, Apple and Microsoft developed education-specific versions of their platforms that include privacy protections that limit how they will use student information. School districts set up programs to screen ed tech software, even though few of the new laws came with funding.

But many of these federal and state protections apply only to companies whose products are designed for schools, or if schools have a privacy-protective contract with vendors. As schools rushed to provide distance learning during their coronavirus shutdowns, some of the tools adopted were not developed for educational environments, leaving children’s data at risk for sale or marketing uses.

If your child’s school has rolled out new technology platforms for online learning, there are important steps you can take to determine whether the tool includes adequate safeguards to protect student privacy. First, ask whether your school has vetted the company or has a contract in place that includes specific limitations on how student information can be used. Don’t hesitate to ask your child’s teacher to explain what data may be collected about your child and how it will be used — you have a right to this information.

Second, check to see if the company has signed the Student Privacy Pledge, which asks companies that provide technology services to schools to commit to a set of 10 legally binding obligations. These include not selling students’ personal information and not collecting or using students’ personal information beyond what is needed for the given educational purposes. More than 400 education technology companies have signed the pledge in recent years, so this can be a quick resource for identifying businesses that have demonstrated a commitment to ensuring that student data are kept private and secure.

Most importantly, take time to review each program’s privacy settings with your child and have an honest discussion about behavior online. Even the strictest privacy controls can’t always prevent a student from disrupting class by making racist remarks in the chat or sharing the link or log-in credentials. I hate to load another burden on parents who are trying to work from home, but making sure your kid isn’t an online troll is partly on you.

Now more than ever, we are relying on technology to keep in touch with work, school, and friends and family. It hasn’t been — and will never be — perfect. Policymakers can help schools ensure that the technologies they use meet privacy and security standards by providing the resources for schools to employ experts in those fields.

As we all try to adjust to this new normal, we should embrace technologies that can add value to students’ educational experience, enhance our ability to work remotely and help us stay connected. But we must first make sure the appropriate safeguards are in place so privacy and security don’t fall by the wayside.

Jules Polonetsky is the CEO of the Future of Privacy Forum, a Washington, D.C.-based nonprofit that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Previously, he served as chief privacy officer at AOL and DoubleClick, as New York City consumer affairs commissioner, as a New York state legislator, as a congressional staffer and as an attorney.

A Landmark Ruling in Brazil: Paving the Way for Considering Data Protection as an Autonomous Fundamental Right

Authors: Bruno Ricardo Bioni and Renato Leite Monteiro


A historic ruling of the Brazilian Supreme Court from May 7, 2020 describes the right to data protection as an autonomous right stemming from the Brazilian Constitution. By a significant majority, 10 votes to 1, the Court halted the effectiveness of Presidential Executive Order MP[1] 954/2020, which mandated that telecom companies share subscriber data (e.g., name, telephone number, address) of more than 200 million individuals with the Brazilian Institute of Geography and Statistics (IBGE), the country’s agency responsible for performing census research. More important than the decision itself was its reasoning, which paves the way for recognizing the protection of personal data as a fundamental right independent of the right to privacy, which already enjoys such recognition, in a similar fashion to the Charter of Fundamental Rights of the European Union. This article summarizes the main findings of the ruling. First, (1) it provides background on the role of the Brazilian Supreme Court and the legal effects of the ruling. It then looks into (2) the facts of the case and (3) the main findings of the Court, to conclude with (4) an analysis of what comes next for Brazilian data protection and privacy law.

  1. The role of the Supreme Court and its rulings in the Brazilian legal system

The Brazilian legal system mirrors the federative structure of the country. Each state has its own lower courts and appeal bodies. At the federal level, there are also lower courts and appeal bodies with specific scope, covering, for example, labor law, cases with international effects, or lawsuits against federal agencies. On top of these sit superior courts, also with specific scope, such as particular violations of federal laws.

At the top of the system sits the Brazilian Supreme Court (STF), a constitutional court of eleven Justices appointed by the President. With few exceptions, only extraordinary cases that directly implicate the federal constitution, e.g., violations of fundamental rights, reach the Court, and its rulings can have binding effects upon all other levels of the Brazilian legal system, depending on the type of proceeding or the effects granted by the Justices.

One particular type of proceeding, known as a Direct Action of Unconstitutionality (ADI), can be filed directly with the Supreme Court, without first being heard by lower courts, in cases in which laws or norms directly violate the constitution. Rulings in this type of proceeding have nationwide binding effects on all entities of the three branches of government and on private organizations. This was the type of proceeding filed at the STF to discuss data protection as an autonomous fundamental right. Its ruling, therefore, has general binding effects.

  2. Facts of the case and proceedings

Due to the social distancing measures adopted in response to the COVID-19 pandemic, staff of the Brazilian Institute of Geography and Statistics (IBGE) are not able to visit citizens to conduct the face-to-face interviews required for the statistical research underlying the national census, known as the National Household Sample Survey (PNAD). This is the context behind Presidential Executive Order 954/2020 (MP), which aimed to allow the IBGE to carry out its census research through telephone interviews. In other words, the declared purpose was to avoid a “statistical blackout”.

The telephone interviews were meant to collect data on various socioeconomic characteristics, such as population, education, work, income, housing, social security, migration, fertility, health, and nutrition, among other topics that can be included in the research according to Brazil’s information needs, e.g., behavioral data in the context of the pandemic. These interviews have always been conducted in person with a sample of 70,000 households forming a statistical representation of the Brazilian population. However, the MP mandated that the subscriber data of 200 million telecom clients be shared with the IBGE to perform the census. At first glance, then, the first question brought to the Court’s attention was: why is the personal data of so many citizens necessary to achieve the same purpose that used to be achieved with far less information?

The issue was raised by four different political parties and the national bar association, which filed five ADIs with the STF alleging violations of the fundamental right to privacy, expressly granted by Art. 5, X, of the Federal Constitution, and of the right to secrecy of communications data, provided by Art. 5, XII. In previous case law, the Court had struggled to recognize stored data, such as subscriber data, as protected by Art. 5, XII. Long-standing precedents granted that type of protection only to data in motion, such as ongoing telephone calls or data being transmitted. Acknowledging the need to update this understanding in light of new technologies and the impact that the misuse of data can have on individuals and society, the petitioners presented another argument: the need to recognize the right to the protection of personal data as an autonomous fundamental right.

When the ADIs were filed, Justice Rosa Weber, Rapporteur of the case, granted an injunction suspending the effects of the MP until it could be discussed by all Justices. She identified probable violations of the aforementioned constitutional rights and argued that, despite the pandemic, there was no public interest in sharing the personal data of 200 million people to carry out the desired public policy.

The trial before the eleven Justices started on May 6, with the participation of the parties’ lawyers and of amici curiae, including Data Privacy Brasil. The organisation filed an amicus brief and was represented at the oral argument by its Director Bruno Ricardo Bioni (a co-author of this article), who spoke at length about the singular position of the right to the protection of personal data, its status as an autonomous fundamental right, the many defects of the executive order, and the current data protection landscape in Brazil, including the fact that the Brazilian General Data Protection Law (LGPD) is still in vacatio legis. He also reminded the Court that the national data protection authority, which will provide guidance and enforcement, is yet to be established. An English translation of the oral statement is available online.

  3. Main findings of the Court

Historically, the STF has ruled solely on the basis of the right to privacy and, most importantly, has followed the legal rationale of that fundamental right, by which only private or confidential data should be protected. In case RE 601314, the Court ruled that the Brazilian Federal Revenue Office (the Brazilian IRS) could have access to financial data from banks without a court order. According to the Court, the data would remain confidential since only the IRS’s staff would have access to it, and they are bound by strict informational fiduciary duties. Moreover, such data did not comprise sensitive (‘intimate’) information about individuals (e.g., religion, family relationships) and, therefore, the IRS’s access requests would not disproportionately interfere with the right to private life. In case RE 1055941, the same reasoning was adopted to grant similar data access powers to the Public Prosecutor’s Office.

The new precedent marks a remarkable shift in how the Court analyzes privacy and data protection because it changes the focus from data that is secret to data that is attributed to persons and might impact their individual and collective lives, regardless of whether it is kept secret. There is no longer such a thing as irrelevant data. Justice Carmen Lucia argued that the world we used to live in, where personal data was freely available in telephone catalogs without substantial risks, no longer exists. In this sense, the Brazilian Federal Constitution protects not only confidential data, but any type of data that can be deemed an attribute of the human personality. The best example is habeas data, a procedural constitutional right by which any person has the right to know what information organizations hold about them, as argued by Justice Luiz Fux, recalling a precedent of the Supreme Court (Extraordinary Appeal 673.707). The habeas data right, originally available only against public organizations, is a remnant of dictatorial times in Brazil and throughout Latin America, when information about citizens was kept secret by the government and used to suppress the population. This provision can now be used to retrieve personal data held by private entities, as long as the databases at issue are of public interest, such as consumer protection databases managed by data brokers.

If the Brazilian Constitution’s core value is the protection of human dignity, the protection it affords should go beyond the right to privacy in order to address other harmful challenges to an individual’s existence, and not only harms to personality rights. Today, humanity can be hacked not only by granting access to data regarding our intimate lives or to aspects of the human personality that must be kept under lock and key. Recalling the work of philosopher Yuval Harari, Justice Gilmar Mendes argued that, due to technological progress, any use of data that touches on an extension of our individuality can pose a threat to human rights and fundamental freedoms. For this reason, Justice Fux argued that, just like the Charter of Fundamental Rights of the EU, the Brazilian Constitution should recognize the protection of personal data as an autonomous fundamental right, distinct from the right to privacy.

The Cambridge Analytica scandal was recalled by Justice Luiz Fux to contextualize the collective dimension of data protection rights. By describing the facts surrounding that case, the Justice highlighted how the misuse of personal data can have an impact that surpasses the individual and can affect the very foundations of democracies and influence electoral outcomes. “We know today that the dissemination of this data is very dangerous,” affirmed Justice Fux, recalling his term as President of the Superior Electoral Court, when he analyzed a case concerning the lack of transparency and knowledge of how personal data is collected and used for political purposes, which can lead to unintended consequences that violate individual and collective rights.

If the mere processing of personal data can pose risks to the rights of individuals, it should be backed by appropriate safeguards in order to manage potentially harmful effects. Thus, personal data should receive protection akin to that conferred by the due process clause. It is a type of protection that takes into consideration that there are risks to public liberties associated with the mere processing of data linked to a person, as argued by Justice Gilmar Mendes, quoting Julie Cohen and her work on informational due process.

“The use of personal data is inevitably an interference with the personal sphere of someone,” highlighted Justice Luis Roberto Barroso. As a consequence, its proportionality should be assessed by verifying whether:

  a) the purpose of the processing is clearly specified and legitimate; 
  b) the amount of data collected is limited to what is strictly necessary in relation to the purposes for which they are being processed; 
  c) information security measures are adopted to avoid unauthorized third-party access.

This proportionality test, formulated by Justice Luis Roberto Barroso, is clearly modeled on the traditional principles of personal data protection. For the first time, a Justice of the Supreme Court has issued a ruling with such strong wording supporting fair information practice principles as components of an autonomous constitutional right to data protection.

In addition, another landmark case was taken up by the STF two weeks later, with two Justices having already published their opinions. The main question in this second case is whether Internet platforms may implement encryption to a level that limits or even prevents law enforcement authorities from accessing stored or in-transit data necessary to investigate crimes. The proceeding, ADPF 403, known as a Request of Non-Compliance with Basic Constitutional Principles (ADPF), which has the same effects as an ADI, again concerns alleged violations of the fundamental rights to privacy and secrecy of communications data. “Digital Rights are Fundamental Rights”: with this strong affirmation, Justice Edson Fachin, the rapporteur, voted to rule out any interpretation of the constitution that would allow a court order to require exceptional access to end-to-end encrypted message content or that, by any other means, would weaken the cryptographic protection of internet applications. Justice Rosa Weber highlighted in her opinion that “the past 3 decades have been an arms race of protection technologies and privacy violations. The law cannot be ignored and must preserve the balance between privacy and the proper functioning of the State”. She also stated that “cryptography, as a technological resource, has taken on special importance in the implementation of human rights”.

The case is still ongoing and pending the votes of the other nine Justices. Nonetheless, the two opinions already published are a breakthrough and show a marked change in the perception and understanding of Brazil’s highest court regarding privacy and data protection rights.

  4. A look to the future: the Brazilian General Data Protection Law and the amendment to the Brazilian Constitution

Despite this historic ruling, Brazil still lacks the institutional infrastructure to supervise and enforce data protection rights. The National Data Protection Authority was created by the Brazilian General Data Protection Law (“LGPD”), but is yet to be established. The LGPD was approved in 2018, with an initial adaptation period of 18 months, which was soon extended by another 6 months, setting the effective date at August 2020. In parallel, a proposal to amend the Federal Constitution aims to include the protection of personal data in the list of fundamental rights. The proposal was unanimously approved by the Senate and by a special parliamentary commission of the House of Representatives. It now needs to be approved by two-thirds of that house.

Now, due to the COVID-19 pandemic, a new bill and another executive order aim to postpone the entry into force of the LGPD to 2021. The bill has already been approved by both the Senate and the House of Representatives and is now awaiting Presidential confirmation. If ratified as is, it would keep the effective date of August 2020, but would amend the LGPD so that penalties and enforcement actions apply only from August 2021. In parallel, a presidential executive order has already amended the LGPD to change the effective date to May 2021. However, that order must be approved by Congress by July of this year, which is unlikely to happen due to disputes between the two branches. As a result, we may not know until July, one month before the original (and still possible) effective date, when the law will actually be in force. On top of that, the National Data Protection Authority (ANPD), created in December 2018, is yet to be established. We may therefore end up in a twilight zone, with no certainty about what will happen.

What is remarkable is that, even before the bill to amend the constitution is adopted, which may not happen in the near future due to political unrest, this ruling of the Brazilian Supreme Court already paves the way for recognizing the right to data protection in practice.

 

About the authors:

Bruno Ricardo Bioni is a PhD candidate at the University of São Paulo School of Law. He was a study visitor at the Council of Europe (CoE) and at the European Data Protection Board (EDPB). He is a founder of Data Privacy Brasil. Contact: [email protected].

Renato Leite Monteiro is a PhD candidate at the University of São Paulo School of Law. He was a study visitor at the Council of Europe and actively participated in the discussions that led to the Brazilian General Data Protection Law. He is a founder of Data Privacy Brasil. Contact: [email protected]

Data Privacy Brasil is a non-governmental organization with two operational branches: the Data Privacy Brasil School, which provides training services and privacy courses, and the Research Association Data Privacy Brasil, which focuses on research into the interconnection of personal data protection, technology, and fundamental rights. Data Privacy Brasil aims to improve privacy and data protection capacity-building for organizations active in Brazil.


[1] MP is the Brazilian abbreviation for Provisional Measure, a legal act through which the President of Brazil can enact laws that remain in force for 60 days without approval by the National Congress.

Endgame Issues: New Brookings Report on Paths to Federal Privacy Legislation

Authors: Stacey Gray, Senior Counsel (US Legislation and Policymaker Education), Polly Sanderson, Policy Counsel

 

This afternoon, The Brookings Institution released a new report, Bridging the gaps: A path forward to federal privacy legislation, a comprehensive analysis of the most challenging obstacles to Congress passing a federal privacy law. The report includes a detailed set of practical recommendations and options for legislative text, the result of work with a range of stakeholders to draft a consensus-driven model privacy bill that would bridge the gaps between sharply divided positions (read the full legislative text of that effort here).

Among the legislative options for issues that will have to be addressed to pass a federal privacy law, the report explores: endgame issues (including preemption and enforcement), hard issues (such as limits on processing of data, civil rights, and algorithmic decision-making), solvable issues (such as covered entities, data security, and organizational accountability), and implementation issues (such as notice, transparency, and effective dates). 

Below, we discuss how the Brookings report addresses the two “endgame issues,” enforcement and preemption, on the path toward federal privacy legislation. We agree that these are endgame issues given that neither is optional – both topics must be addressed in any federal privacy law – and because they are the issues on which lawmakers on both sides of the aisle (and, more broadly, industry and privacy advocates) remain most deeply divided.

Enforcement

Any meaningful federal law must contain provisions for its enforcement. However, there is considerable disagreement regarding how a privacy law should be enforced. Enforcement mechanisms can vary widely, from agency enforcement (by the Federal Trade Commission or another federal agency), to state law enforcement (such as Attorneys General), to various kinds of private rights of action (by which individuals can challenge violations in court).

A number of Senate and House Democrats and privacy advocates are proponents of a federal private right of action (usually in addition to federal agency enforcement). Many privacy advocates observe that private litigation has played an important role in enforcing federal civil rights laws. They have also expressed concerns that a federal agency will not have sufficient resources, political will, or incentives to adequately enforce the law, for example, when a violation involves harm to only one or a few individuals.

In contrast, most tech and business groups, and many Republicans, have expressed support for the more centralized enforcement authority of the Federal Trade Commission. Typically, they observe that data privacy harms can be difficult to define and measure, and argue that centralized enforcement would provide needed clarity and legal certainty to businesses and consumers around a consistent national standard. Business stakeholders also tend to cite concerns over contingency-based class action litigation, including risks to small businesses and financial incentives for meritless litigation.

The Brookings proposal suggests a potential compromise: a tiered and targeted private right of action. Recovery would typically be limited to “actual damages,” but statutory damages of up to $1,000 per day could be imposed for “wilful or repeated violations.” Specified harms under the duty of care would not be subject to a heightened standard, while other violations would require individuals to show a “knowing or reckless” violation in order to sue. Technical violations would give rise to suit only if they were “wilful or repeated.” Importantly, potential plaintiffs would also be required to exercise a “right of recourse” before bringing suit. This approach would give covered entities an opportunity to receive notice and cure the violation, and individuals a way to address privacy disputes outside the courts.

Preemption

When Congress passes a federal privacy law, lawmakers must decide to what extent it will “preempt,” or nullify, current and future state and local privacy and data protection laws. Given the nature of modern data flows, most companies see clear benefit in uniform obligations across state lines and in consumers having a core set of common rights. However, some argue that privacy can also have a uniquely local character, and note that state legislators have been at the forefront of many novel privacy protections, including in response to crises or rapid technological change.

The Brookings report proposes several potential compromises to attempt to bridge the gaps between the broad preemption in Senator Wicker (R-MS)’s staff discussion draft and the narrow preemption provisions in most Democratic bills, including Senator Cantwell’s Consumer Online Privacy Rights Act (COPRA). The report suggests preempting state laws only where they interfere with federal provisions specifically related to data collection, processing, transfers, and security. It also recommends that the Federal Trade Commission be authorized to preempt any state law inconsistent with the federal standard, and suggests a limited eight-year sunset clause on preemption.

Looking Ahead

We are optimistic that this new report from The Brookings Institution will be a source of thoughtful debate and help stakeholders advance the conversation about these contentious issues. In addition to the difficult “endgame” issues of enforcement and preemption, the report identifies a wide range of other solvable issues, having to do with implementation and operations, on which there is broad agreement. As a result, it provides a highly practical starting point for stakeholders to engage on the key issues that will require consensus.

The report observes that its recommendations “will not satisfy maximalists on either side of the debate” but that they may address the “legitimate interests of divergent stakeholders.” Indeed, both sides have something to gain from striking a balance – and we agree that “both have something to lose from continued inaction and stalemate.”

Thermal Imaging as Pandemic Exit Strategy: Limitations, Use Cases and Privacy Implications

Authors: Hannah Schaller, Gabriela Zanfir-Fortuna, and Rachele Hendricks-Sturrup


Around the world, governments, companies, and other entities are either using or planning to rely on thermal imaging as an integral part of their strategy to reopen economies. The announced purpose of using this technology is to detect potential cases of COVID-19 and filter out individuals in public spaces who are suspected of suffering from the virus. Experts agree that the technology cannot directly identify COVID-19. Instead, it detects heightened temperature that may be due to a fever, one of the most common symptoms of the disease. Heightened temperature can also indicate a fever resulting from a non-COVID-19 illness or non-viral causes such as pregnancy, menopause, or inflammation. Not all COVID-19 patients experience heightened temperature, and individuals routinely reduce their temperatures through the use of common medication.

In this post, we (1) map out the leading technologies and products used for thermal imaging, (2) provide an overview of the use cases currently being considered for thermal imaging, (3) review the key technical limitations of thermal scanning as described in the scientific literature, (4) summarize the chief concerns articulated by privacy and civil rights advocates, and (5) provide an in-depth overview of regulatory guidance from the US, Europe, and Singapore regarding thermal imaging and temperature measurement as part of deconfinement responses, before reaching (6) our conclusions.


  1. Overview of Technologies Being Used

FLIR Systems, Inc., one of the largest makers of thermal imaging cameras, explains that the cameras detect infrared radiation and measure the surface temperatures of people and objects by measuring the temperature differences between them. Thermal cameras can be used to sense elevated skin temperature (EST), a proxy for core body temperature, and thus identify people who may have a fever. This allows the cameras to be used to single out people with EST for further screening with more precise tools, such as an oral thermometer. As FLIR acknowledges, thermal cameras are not a replacement for such devices, which directly measure core body temperature.

FLIR explains that thermal cameras need to be calibrated in a lab, and be periodically recalibrated to ensure that their temperature readings match the actual temperatures of people and objects. FLIR recommends having cameras recalibrated annually. In addition to reading absolute temperatures, FLIR’s cameras have a ‘screening’ mode, where people’s temperatures are measured relative to a sampled average temperature (SAT) value. This value is an average of the temperatures of ten randomly chosen people at the testing location. The camera user then sets an “alarm temperature” at 1°C to 3°C greater than the SAT value, and the camera displays an alarm when it detects someone in this zone. As FLIR notes, a SAT value can be more accurate than absolute temperatures because it accounts for “many potential variations during screening throughout the day, including fluctuations in average person temperatures due to natural environmental changes, like ambient temperature changes.” 
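For illustration, the sketch below captures the screening-mode logic described above: a sampled average temperature (SAT) is computed from the readings of ten randomly chosen people, and a later reading triggers an alarm when it exceeds that baseline by a configured offset in the 1°C to 3°C range. The function names and the 2.0°C default offset are our own illustrative assumptions, not FLIR’s implementation.

```python
# Minimal sketch of the "screening mode" described above: compute a sampled average
# temperature (SAT) from ten randomly chosen people, then flag readings that exceed
# the SAT by a configured offset (1-3 °C in practice). Names and the 2.0 °C default
# are illustrative assumptions, not FLIR's code.
import random
from statistics import mean

def sampled_average_temperature(readings_c: list[float], sample_size: int = 10) -> float:
    """Average the skin-temperature readings of `sample_size` randomly chosen people."""
    return mean(random.sample(readings_c, sample_size))

def alarm_triggered(reading_c: float, sat_c: float, offset_c: float = 2.0) -> bool:
    """Flag a reading that exceeds the SAT baseline by the configured offset."""
    return reading_c >= sat_c + offset_c

# Hypothetical usage:
# baseline = sampled_average_temperature(first_readings_at_site)
# if alarm_triggered(new_reading, baseline):
#     refer_person_for_screening_with_clinical_thermometer()
```

Comparing readings against a site-specific sampled baseline, rather than a fixed absolute threshold, is what allows the system to absorb the ambient and population-level temperature drift that FLIR describes.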

The accuracy of a thermal camera’s reading is affected by several factors, including the camera’s distance from the target. FLIR suggests that the camera should be as close to the target as possible, and telephoto lenses might be appropriate for longer-range readings. The camera’s functions and settings can affect its accuracy as well and need to be appropriately configured.

Thermal imaging can be paired with various other technologies. Draganfly, Inc., a Canadian drone company, has mounted thermal sensors on what it calls ‘pandemic drones’ for broad-scale aerial surveillance. The drones are also equipped with computer vision that can sense heart and respiratory rate, detect when someone coughs or sneezes, and measure how far apart people are from one another to enforce social distancing. Reportedly, it can do all of this through a single camera from a distance of 160 feet. In a video interview, Draganfly’s CEO stated that the sensors can even distinguish between different kinds of coughing.

Thermal imaging has also been paired with facial recognition by some companies based in China, including SenseTime and Megvii. Chinese AI startup Rokid has mounted a camera on a pair of glasses that uses facial recognition and thermal imaging to identify people, measure their temperature, and record this information. In Thailand, thermal imaging has been integrated into the existing biometric-based border control system, which identifies travelers using fingerprint scans and facial recognition.

While many US locations still perform temperature screenings with handheld thermometers, interest in thermal imaging cameras is growing rapidly. Several thermal imaging companies claim to have sold thousands of units to US customers since the COVID-19 outbreak began. Thermal cameras are appealing as an exit strategy solution because of some promised advantages over handheld thermometers. Vendors claim the cameras can detect the temperatures of many people at once, whereas handheld thermometers can only test one person at a time, and that they can measure temperatures from a distance as people move. In theory, these abilities would lessen or eliminate the need for people to wait in line to have their temperatures taken, which in turn would also reduce the risk of COVID-19 transmission. All of these promises should be weighed against the limitations of the technology and the implications for privacy and other civil rights.

  2. Current Use Cases

Airports. Airports across the world are using thermal cameras to screen travelers. Some countries, including China, Japan, South Korea, Singapore, Canada, and India, began using them in 2002-2003 (in response to SARS) or in 2009 (in response to swine flu) and continue to use them in response to COVID-19. Some airports in these countries have installed additional cameras in recent months. Other countries, like Italy, have recently begun using thermal imaging at airports for the first time. Rome’s Fiumicino Airport is testing helmets equipped with thermal cameras, worn by its staff, to detect travelers’ temperatures. Other countries have resisted this technology. In the UK, Public Health England decided that British airports will not use thermal cameras, although the CEO of Heathrow Airport was in favor of doing so. US airports are not yet using thermal cameras but are evaluating the possibility of doing so. In the meantime, screening procedures include taking temperatures with a handheld thermometer, looking for signs of illness, and requiring travelers to fill out a questionnaire. In response to plans by the US Department of Homeland Security to check commercial airline passengers’ temperatures, a member of the Privacy and Civil Liberties Oversight Board is pressing the agency for more details, warning that the global pandemic “is not a hall pass to disregard the privacy and civil liberties of the traveling public.”

Transportation. Some Chinese cities are equipping public transportation hubs with cameras that combine thermal imaging and facial recognition. Wuhan Metro transport hubs are being equipped with cameras from Guide Infrared, and Beijing railway stations are adding cameras from Baidu and Megvii. In addition, a Chinese limousine service has installed thermal cameras in its vehicles to monitor drivers and passengers. In Dubai, police are using thermal imaging and facial recognition to monitor public transport users via cameras mounted on ‘smart helmets.’

Employee Screening. Companies are using thermal cameras to screen employees for fevers. This is done broadly in China and South Korea at entrances to offices and major buildings, often using combined thermal imaging and facial recognition. Elsewhere, thermal cameras without facial recognition are increasingly used. For example, Brazilian mining company Vale SA is installing thermal cameras to screen employees entering buildings, mines, and other areas. Indian Railways installed a thermal camera from FLIR at an office entrance, among other COVID-19 mitigation measures.

Some US companies and organizations are also screening employees with thermal cameras, including Tyson Foods; Amazon, which is screening warehouse workers; and the VA Medical Center in Manchester, New Hampshire, which is scanning staff and patients. It appears that most US companies that have begun screening employees for fevers, like Walmart and Home Depot, are using handheld thermometers.

Public Facing Offices. As stated above, thermal cameras read skin temperature, and are not a substitute for temperature-taking methods that measure core body temperature. However, some locations are making decisions based solely on thermal camera readings. For example, in Brasov, Romania, a city office installed thermal cameras at its entrances, automatically denying entrance to anyone with a temperature of over 38°C. Because thermal camera readings do not always match core body temperatures, there is a risk that people without fevers will be unfairly impacted by reliance solely on thermal camera temperature readings.

Customer and Patient Screening. Thermal cameras are growing in popularity among US businesses and hospitals as a way to screen customers and patients, respectively. A grocery store chain in the Atlanta, Georgia area is screening incoming customers using FLIR cameras. Customers with temperatures of 100.4°F or higher are pulled aside by an employee and given a flyer asking them to leave, in an attempt to handle the situation discreetly. Wynn Resorts in Las Vegas plans to screen guests at its properties and require anyone who registers a temperature of 100.4°F or higher to leave. Texas businesses and hospitals are also starting to adopt thermal cameras. Hospitals elsewhere are following this trend – for example, Tampa General Hospital in Florida now screens patients with a thermal camera system made by care.ai, a healthcare technology company.

Public Surveillance. Thermal cameras allow authorities and businesses to screen large numbers of people in real-time, making them ideal for monitoring public areas. In China, thermal cameras with facial recognition surveil many public places; some systems can even notify police of people who are not wearing masks. In several cities in Zhejiang province, police and other officials are wearing Rokid’s thermal glasses to monitor people in public spaces like parks and roadways. These glasses combine thermal imaging with facial recognition, as they also record photos and videos. Thermal sensing drones are also being used in numerous cities.

Use of thermal imaging has also grown elsewhere. In India, a thermal camera provider is considering installing its cameras around Delhi, both in public spaces and in businesses, and Huawei has offered thermal cameras as a solution for monitoring COVID-19 in the country. Outside of Asia, in New Zealand, thermal cameras originally developed for pest control are being reworked to monitor for fevers in public places and are in use by some businesses. Police in some areas of the UK use thermal cameras to spot people breaking social distancing orders at night. The Qassim region of Saudi Arabia is monitoring the public with drones carrying thermal cameras.

It is uncommon in the US to use thermal cameras as a tool for public surveillance. However, in April, police in Westport, Connecticut tested a Draganfly ‘pandemic drone’ intended to measure temperatures and enforce social distancing. Westport police use drones for other purposes, but not for this kind of mass monitoring. The program was quickly dropped when it was met with criticism from the public and the American Civil Liberties Union (ACLU) of Connecticut, which questioned the effectiveness of the drones and raised privacy concerns. Other cities that were also interested in Draganfly’s drones, like Los Angeles, Boston, and New York, may still be considering them.

In addition to drones, some US entities are reportedly considering Rokid’s thermal glasses. The company is discussing the sale of its glasses with various US businesses, hospitals, and law enforcement departments.

  3. Technical and Other Limitations

In general, thermal imaging is used in regulated clinical settings with validated clinical protocols to diagnose or detect illness and triage patients. The use of specific thermal imaging devices to detect possible cases of COVID-19 or for other medical purposes, in general, requires US Food and Drug Administration (FDA) approval. In such cases, thermal imaging technologies would be considered by the FDA as medical devices. Concerning labeling for thermal imaging technologies, the FDA stated:

“When evaluating whether these products are intended for a medical purpose, among other considerations, FDA will consider whether: 

1) They are labeled or otherwise intended for use by a health care professional; 

2) They are labeled or otherwise for use in a health care facility or environment; and 

3) They are labeled for an intended use that meets the definition of a device, e.g., body temperature measurement for diagnostic purposes, including such use in non-medical environments (e.g., airports).”

The use of thermal imaging in non-medical environments, however, makes it necessary to explore the technical limitations of using such technologies in high-traffic areas, like airports, for non-diagnostic yet medical purposes.

The fact that fever or body temperature alone can be a poor indicator of viral infection or contagion complicates the validity of thermal scanning for COVID-19 surveillance. Often, if not most of the time, fevers can be masked with over-the-counter or otherwise unrestricted treatments, such as non-steroidal anti-inflammatory drugs, which can alleviate signs of fever for four to six hours depending on the severity or stage of the condition. Non-infectious conditions, such as pregnancy, menopause, or inflammation, might also cause elevated temperature, which can render thermal scanning highly sensitive but non-specific to any particular condition. For example, according to Johns Hopkins Medicine, hot flashes are the most common symptom of menopause, affecting 75% of all women in this stage, for up to two years. Confounding factors like inconsistencies or variations in viral response or strain can also render thermal scanning insufficient for detecting specific types of infectious disease, such as respiratory viruses.
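A simple, entirely hypothetical calculation shows why a sensitive but non-specific screen produces mostly false alarms when the condition being screened for is rare. The sensitivity, specificity, and prevalence figures below are illustrative assumptions chosen for the arithmetic, not values drawn from the studies discussed in this section.

```python
# Worked illustration of "highly sensitive but non-specific": even a scanner that
# catches most fevers will mostly flag people who do not have COVID-19 when the
# disease is rare in the screened population. All numbers below are hypothetical.
def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Share of flagged individuals who actually have the condition (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical example: 80% sensitivity, 90% specificity, 1% of travelers infected.
ppv = positive_predictive_value(0.80, 0.90, 0.01)
print(f"{ppv:.1%} of flagged travelers would actually be infected")  # roughly 7-8%
```

Under these assumed numbers, more than nine out of ten people flagged by the scanner would not have COVID-19, which is the practical meaning of a screen being sensitive but non-specific.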

The scientific literature suggests that reliance on public thermal scanning to detect fever is concerning from an ethical standpoint and, given its technical limitations, is not a reliable disease surveillance strategy to support phased reopening. In a study evaluating the utility of thermal scanning in airports, researchers concluded that because the technology would be applied in a public setting without passengers’ knowledge, controversy and complexity around matters of opt-in/opt-out consent are inevitable. Studies have shown that thermal imaging technology can reasonably correlate core temperatures with influenza infection. However, its technical limitations render it insufficient to detect fever in settings where several individuals are moving in different directions at once, as in public settings with random, high pedestrian traffic. FDA labeling requirements are consistent with this limitation, mandating that labels acknowledge that the technology “should be used to measure only one subject’s temperature at a time.” Therefore, thermal scanning protocols would likely require structured, individual-level assessments, along with non-compulsory and non-coercive (freely given) consent, to be workable within public health surveillance settings that adhere to ethical standards of personal autonomy.

  4. Privacy and Civil Rights Advocates’ Concerns

Privacy and civil rights advocates in the US have raised concerns about the potential consequences of using thermal imaging, such as discrimination and loss of opportunity. Since thermal imaging cannot distinguish fevers caused by COVID-19 from other causes of high body temperature, equating raised body temperature with the virus would lead to many people being falsely identified as COVID-19 risks and facing the associated downsides of that label, including discrimination. The Electronic Frontier Foundation (EFF) points out that thermal cameras are surveillance devices that can “chill free expression, movement, and association; aid in targeting harassment and over-policing of vulnerable populations; and open the door to facial recognition.” In light of the questionable effectiveness of thermal cameras, EFF cautions against using them to monitor the public at large. The ACLU of Connecticut criticized Draganfly’s drones as “privacy-invading,” and urged officials to adopt only those surveillance measures against the spread of COVID-19 that are “advocated for public health professionals and restricted solely for public health use.” These concerns are also expressed in the context of fears that surveillance technologies adopted during the pandemic may remain in place long after their original purpose has been fulfilled.

In a recent White Paper on “Temperature Screening and Civil Liberties during an Epidemic,” the ACLU recommended that temperature screening “should not be deployed unless public health experts say that it is a worthwhile measure notwithstanding the technology’s problems. To the extent feasible, experts should gather data about the effectiveness of such checks to determine if the tradeoffs are worth it.” The ACLU further recommended that people should know when their temperature is going to be taken and that “standoff thermal cameras should not be used.” In addition, “no action concerning an individual should be taken based on a high reading from a remote temperature screening device unless it is confirmed by a reading from a properly operated clinical grade device, and provisions should be made for those with fevers not related to infectious illness.”

  5. Regulatory Responses

In the US, regulatory responses to temperature taking in non-healthcare scenarios stem primarily from anti-discrimination statutes. The Equal Employment Opportunity Commission (EEOC) recently revised its rules regarding the Americans with Disabilities Act in the context of a pandemic. The revisions allow employers to take employees’ temperatures during the COVID-19 pandemic. They also allow employers to take job candidates’ temperatures after making a conditional offer, as well as to withdraw a job offer if a newly hired employee is diagnosed with COVID-19. However, the guidance does not distinguish between manual temperature checks and thermal scanning cameras.

This distinction drives many of the regulatory responses in Europe, where multiple Data Protection Authorities (DPAs) have published guidance on checking the temperatures of employees, as well as of customers or pedestrians. One regulator that draws a clear distinction between the two ways of measuring temperature is the CNIL (the French DPA). According to the CNIL, “the mere verification of temperature through a manual thermometer (such as, for example, contactless thermometers using infrared) at the entrance of a place, without any trace being recorded, and without any other operation being effectuated (such as taking notes of the temperature, adding other information etc.), does not fall under data protection law”.

However, things fundamentally change when thermal scanning through cameras is involved. Here the CNIL issued a prohibition: according to the law (in particular Article 9 of the General Data Protection Regulation, GDPR), and in the absence of a law that expressly provides this possibility, employers are forbidden from recording their employees’ or visitors’ temperature readings and from deploying automated temperature-capture tools such as thermal cameras.

The prohibition of these two types of temperature measurement echoes guidance issued by the French Ministry of Labor in its “National Protocol for Deconfinement Measures.” Before setting out a prohibition on temperature measurement with the use of cameras, the Protocol relies on the findings of the High Council for Public Health that COVID-19 infection may be asymptomatic or barely symptomatic, and that “fever is not always present in patients.” It also recalls that a person with COVID-19 can be infectious “up to 2 days before the onset of clinical signs,” and that “bypass strategies to this control are possible by taking antipyretics.” The Ministry of Labor concludes that “taking temperature to single out a person possibly infected would be falsely reassuring, with a non-negligible risk of missing infected persons.”

The Spanish DPA takes the position that taking the temperature of individuals to determine their ability to enter the workplace, commercial spaces, educational institutions, or other establishments amounts to the processing of personal data; its guidance makes no distinction between manually held thermometers and thermal imaging. It appears to focus instead on the purposes for which individual temperature measurements are used when making this assessment. The Spanish DPA highlights in its detailed guidance that “this processing of personal data amounts to a particularly severe interference in the rights of those affected. On one hand, because it affects data related to health, not only because the value of the body temperature is data related to health by itself, but also because, as a consequence of that value it is assumed that a person suffers or not from a disease, in this case a coronavirus infection.”

The Spanish DPA also notes that the consequences of a possible refusal to enter a specific space may have a significant effect on the person concerned. It therefore urges organizations to consider, among other measures, properly informing workers, visitors, or clients about temperature monitoring. Organizations should also allow individuals with a higher than normal temperature to challenge a decision denying them access to a specific place before personnel who are qualified to assess possible alternative reasons for the high temperature and who can allow access where justified. It is also relevant to note that, when it comes to lawful grounds for processing, the Spanish DPA does not deem consent or legitimate interests appropriate lawful grounds. The processing needs to be based either on a legal obligation or on the interest of public health, ensuring that the additional conditions attached to these two lawful grounds are met.

The Italian DPA (Garante) takes the position that taking one’s “body temperature in real time, when associated with the data subject’s identity, is an instance of processing personal data.” As a consequence of this fact, the DPA states that “it is not permitted to record the data relating to the body temperature found; conversely, it is permitted to record the fact that the threshold set out in the law is exceeded, and recording is also permitted whenever it is necessary to document the reasons for refusing access to the workplace.” This rule applies in an employment context. Where the body temperature of customers or occasional visitors is checked, “it is not, as a rule, necessary to record the information on the reason for refusing access, even if the temperature is above the threshold indicated in the emergency legislation.” 

It is important to highlight that in the case of Italy, special legislation adopted for managing the COVID-19 pandemic mandates temperature taking by “an employer whose activities are not suspended (during the lockdown – n.)” in order to comply with the measures for the containment and management of the epidemiological emergency. This special legislation acts as a lawful ground for processing. Once the legislation expires or becomes obsolete, taking the temperature of employees or other individuals entering a workplace will likely be left without a lawful ground. According to the Garante, another instance where special emergency legislation allows for temperature measurement is in the case of airport passengers. It should also be noted that neither the Garante’s guidance nor the special legislation mentioned above makes a distinction between manual temperature taking and the use of thermal cameras.

By contrast, the Belgian DPA takes the position that “the mere capturing of temperature” is not a processing of personal data, without distinguishing between manual temperature taking and the use of thermal cameras. Accordingly, the DPA issued very brief guidance stating that “if taking the temperature is not accompanied by recording it somewhere or by another type of processing, the GDPR is not applicable.” It nonetheless reminds employers that all the measures they implement must be in accordance with labor law as well as the guidance of competent authorities. 

The Dutch DPA warned controllers that want to measure the temperature of employees or visitors about the uncertainty of detecting COVID-19 by merely detecting a fever. It also advised that “taking temperatures is not simply allowed. Usually you use this to process medical data. And this falls under the GDPR.” According to the Dutch DPA, “the GDPR applies in this situation because you not only measure someone’s temperature, but you also do something with this medical information. After all, you don’t measure for nothing. Your goal is to give or deny someone access. To this end, this person’s temperature usually has to be passed on or recorded somewhere so that, for example, a gate can open to let someone in.” In further guidance on the question of whether temperature measurement falls under the GDPR, the DPA explained that “a person’s temperature is personal data. (…) The results (of temperature measurement – n.) will often have to be passed on and registered somewhere to allow or deny someone access. Systems in which gates open, which give a green light or which do something automated on the basis of the measurement data are also protected by the GDPR.” The DPA also states that even when the GDPR is not applicable in those cases where the temperature is merely read with no further action, a breach of the right to privacy or of other fundamental rights might be at issue: “The protection of other fundamental rights, such as the integrity of the body, may also be expressly at stake. Depending on how it is set up, only measuring temperature can indeed be illegal.”

The UK Information Commissioner's Office (ICO) warns organizations that want to deploy temperature checks or thermal cameras on site that "when considering the use of more intrusive technologies, especially for capturing health information, you need to give specific thought to the purpose and context of its use and be able to make the case for using it. Any monitoring of employees needs to be necessary and proportionate, and in keeping with their reasonable expectations." The ICO does, however, appear to allow such practices in principle, provided a Data Protection Impact Assessment (DPIA) is conducted first. The ICO states that it worked with the Surveillance Camera Commissioner to update a DPIA template for uses of thermal cameras. "This will assist your thinking before considering the use of thermal cameras or other surveillance," the ICO adds. 

The Czech DPA also adopted specific guidance on the use of thermal cameras and temperature screening, taking the position that data protection law applies only when "the employer intends to record the performed measurements and further work with data related to high body temperature in conjunction with other data enabling the identification of the person whose body temperature is being taken." Unlike the Spanish DPA, which found that legitimate interests cannot be a lawful ground for processing such data, the Czech DPA suggests that employers can process the temperature of their employees on the basis of legitimate interests, paired with one of the conditions for processing health data under Article 9(2) GDPR. The DPA further advises that the necessity of such measures needs to be continuously assessed and warns that "measures which may be considered necessary in an emergency situation will be unreasonable once the situation returns to normal."

In Germany, the Data Protection Commissioner of Saarland has already started an investigation into a supermarket that installed thermal cameras to admit only customers with normal temperatures to its premises, declaring to the media that "the filming was used to collect personal data, including health data, in order to identify a potential infected person," and that this measure breached the GDPR and the right to informational self-determination. According to media reports, the supermarket has since suspended the thermal scanning measure. In addition, the DPA of Rhineland-Palatinate notes in official guidance that "the mere fact that an increased body temperature is recorded does not automatically lead to the conclusion that COVID-19 is present. Conversely, an already existing coronavirus disease does not necessarily have to be identified by an increased body temperature. Therefore, the suitability of the body temperature measurement is in doubt." The DPA suggests that employers should instead implement alternative measures to comply with their duty of care towards the health of employees, such as allowing working from home whenever possible or encouraging employees to seek medical advice at the first signs of disease. The DPA of Hamburg is more explicit, clearly stating that "neither the use of thermal imaging cameras nor digital fever thermometers to determine symptoms of illness is permitted" to screen persons entering shops or other facilities; such checks can only be offered to individuals as a "voluntary service." 

All DPAs that have issued guidance on this matter appear to regard thermal scanning and temperature measurement as particularly intrusive measures. Their responses nonetheless vary: from a clear prohibition on using thermal cameras to triage people (CNIL, Hamburg DPA), to allowing thermal scanning in a quite restricted way (Spanish DPA), to possibly allowing video thermal scanning by default as long as a DPIA is conducted (UK ICO), to noting that hand-held temperature measurement without further processing does not fall under data protection law (Dutch DPA, Belgian DPA, Czech DPA, CNIL), to making no distinction between hand-held temperature measurement and video thermal scanning when allowing such measures (Italian DPA). The European Data Protection Board (EDPB) has not yet issued specific guidance on the use of thermal cameras or, more generally, on the measurement of temperature. Given the diversity of approaches taken by European DPAs, harmonized guidance from the EDPB may be needed. 

Elsewhere in the world, the Singaporean Personal Data Protection Commission advises organizations: "where possible, deploy solutions that do not collect personal data. For instance, your organisation may deploy temperature scanners to check visitors' temperature without recording their temperature readings, or crowd management solutions that only detect or measure distances between human figures without collecting facial images."

1. Conclusion

This article has provided a comprehensive overview of the use cases for thermal scanning cameras, their technical and medical limitations, the civil rights concerns surrounding them, and the regulatory responses to date to their use in the fight against the spread of COVID-19, as countries enter the first "deconfinement" stage of the pandemic. Organizations considering the deployment of temperature measurement as part of their exit strategies should carefully analyze whether the benefits of such measures outweigh the risks of discrimination, loss of opportunity, and harm to the civil rights of the individuals who will be subjected to this type of screening en masse. Advice from public health authorities, public health specialists, and other regulators should always be part of this assessment. Organizations should also consult the individuals who will be subjected to these measures, to understand their legitimate expectations about safety at the current stage of the pandemic as weighed against their other rights.

The authors thank Charlotte Kress for her research support. 

For any inquiries, the authors can be contacted at [email protected] or [email protected]