Blog Summary: Ethical Concerns and Challenges in Research using Secondary Data

Digital data is a strategic asset for business. It is also an asset for researchers seeking to answer socially beneficial questions using company-held data. Research using secondary data introduces new challenges and ethical concerns for research administrators and research ethics committees, such as Institutional Review Boards (IRBs).

FPF Senior Researcher, AI & Ethics, Dr. Sara Jordan, analyzes some of these concerns in the post Research Using Secondary Data: New Challenges and Novel Resources for Ethical Governance, published on Ampersand: The PRIM&R Blog.

In this informative analysis, Dr. Jordan examines five key points:

Public Responsibility in Medicine and Research (PRIM&R)’s blog Ampersand is a discussion space for thoughtful conversations on research ethics and oversight. Learn more here.

Future of Privacy Forum Launches its Asia-Pacific Office Led by Dr. Clarisse Girot

The Future of Privacy Forum’s Asia-Pacific office launches today in Singapore under the leadership of Dr. Clarisse Girot. As Director of the FPF Asia-Pacific office, she is responsible for developing and implementing FPF’s strategy in the world’s largest and most populous region.

We asked Dr. Girot to share her thoughts for FPF’s blog on Asia’s role in the global conversation on data protection law and policy, its key characteristics, and the balancing role FPF Asia will play in the region.

1) What are the key characteristics of data protection law and policy developments in Asia? 

Dr. Girot: The first characteristic of these developments is their intensity, and the legal uncertainty that results: legislative and regulatory activity in data privacy in Asia is happening at an unprecedented pace. The adoption of the EU GDPR has been an essential driver behind these developments, but domestic factors also play a role: major data breaches have occurred in most countries, and a series of privacy controversies have confronted citizens, companies, and governments in this region, just as elsewhere. At the same time, the massive development of the digital economy and the emergence of unicorns and local tech champions have sharpened the ambitions of many countries to enhance their status as tech powers, data hubs, and the like.

Major jurisdictions have thus undertaken a general upgrade of pre-existing laws, including some of the oldest data protection frameworks in the world (New Zealand, South Korea, and Japan, for instance), but also more recent, “second generation” laws like Singapore’s PDPA. China’s National People’s Congress is expected to adopt the Personal Information Protection Law (PIPL) before the end of the year, while India’s long-awaited Data Protection Bill may finally be adopted after several years of deliberation. These texts will be groundbreaking, if only because China and India are the two most populous countries in the world.

The second characteristic, amplified by the first, is the fragmentation of legal systems, with variations that make any regional compliance effort particularly difficult. These variations also create gaps in protection that are prejudicial to individuals, as well as gaps in the capacities of regulators to cooperate. 

The third is that APAC jurisdictions have very diverse cultural norms and economic priorities. Some laws are more explicitly grounded in human rights considerations, some are focused on “digital sovereignty” and national security, and many others are more broadly concerned with advancing consumer protection and building confidence in the digital economy. In practice, these three objectives have recently appeared more and more often in combination. The Covid-19 crisis has added a further layer of uncertainty to a region that is at once extraordinarily dynamic, full of contrasts, and therefore very complex to grasp.

2) How do you see Asia’s role in the global debate on data protection law and policy? 

Dr. Girot: It is always difficult to generalize when speaking of “Asian developments” the way we speak of European or US developments, because of the great contrasts that exist from one country to another, especially in the largest region of the world. It is certain, on the other hand, that Asia in general, and certain countries in particular, will take an increasingly important place in the global privacy debate.

This evolution will happen partly because of simple numbers, but also because the scope of privacy discussions has, for historical reasons, been defined essentially in “the West,” and many countries in the region now want to contextualize privacy developments and adapt pre-existing models to make them their own. The majority of jurisdictions in the region will find it difficult to subscribe to an approach focused on privacy as a fundamental human right, but an approach focused exclusively on trade and consumer protection will not suit them either. Cultural differences, and even more importantly differences in economic development, will necessarily lead the countries of the region to weigh in on global developments.

3) How can FPF Asia help key stakeholders in the region with finding a balance between the utility of tech and data, on one hand, and the need to protect rights and interests of communities and individuals, on the other hand?

Dr. Girot: The creation of FPF Asia is incredibly timely. In a region crossed by different currents, and in a high-voltage world, it can take on the discreet but indispensable role of a transformer and “universal adapter.” The needs and expectations are immense, for there is today in the region no platform for cooperation that is at once informal, neutral, and endowed with both the expert background and the capacities necessary to build an inclusive regional dialogue on privacy and data protection.

My four years in a comparable position at the Asian Business Law Institute (ABLI) have taught me that cooperation can take many different forms, depending on the current priorities of regulators and countries. These may include information sessions and informal exchanges with regulators, convening different stakeholders to discuss cutting-edge data protection issues, providing expertise in emerging technology in a way that is useful for policymakers, facilitating peer-to-peer exchanges for businesses active in the region, and providing independent analyses and reports on key policy topics… Asia’s skies are the limit!

If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Director for Global Privacy, at [email protected].

Stay updated: Sign up for FPF Asia-Pacific email alerts.

FPF Launches Asia-Pacific Region Office, Global Data Protection Expert Clarisse Girot Leads Team

The Future of Privacy Forum (FPF) has appointed Clarisse Girot, PhD, LLM, an expert on Asian and European privacy legislation, as Director of its new Asia-Pacific office, based in Singapore. This new office expands FPF’s international reach in Asia and complements FPF’s offices in the U.S., Europe, and Israel, as well as partnerships around the globe.
 
Dr. Clarisse Girot is a privacy professional with over twenty years of experience in the privacy and data protection fields. Since 2017, Clarisse has led the Asian Business Law Institute’s (ABLI) Data Privacy Project, focusing on the regulation of cross-border data transfers in 14 Asian jurisdictions. Prior to her time at ABLI, Clarisse served as Counsellor to the President of the French Data Protection Authority (CNIL), who also chaired the Article 29 Working Party. She previously served as head of CNIL’s Department of European and International Affairs, where she sat on the Article 29 Working Party, the group of EU Data Protection Authorities, and was involved in major international cases in data protection and privacy.
 
“Clarisse is joining FPF at an important time for data protection in the Asia-Pacific region. The two most populous countries in the world, India and China, are introducing general privacy laws, and established data protection jurisdictions, like Singapore, Japan, South Korea, and New Zealand, have recently updated their laws,” said FPF CEO Jules Polonetsky. “Her extensive knowledge of privacy law will provide vital insights for those interested in compliance with regional privacy frameworks and their evolution over time.”
 
FPF Asia-Pacific will focus on several priorities through the end of the year, including hosting an event at this year’s Singapore Data Protection Week. The office will provide expertise on digital data flows and discuss emerging data protection issues in ways that are useful for regulators, policymakers, and legal professionals. Rajah & Tann Singapore LLP is supporting the work of the FPF Asia-Pacific office.
 
“The FPF global team will greatly benefit from the addition of Clarisse. She will advise FPF staff, advisory board members, and the public on the most significant privacy developments in the Asia-Pacific region, including data protection bills and cross-border data flows,” said Gabriela Zanfir-Fortuna, Director for Global Privacy at FPF. “Her past experience in both Asia and Europe gives her a unique ability to confront the most complex issues dealing with cross-border data protection.”
 
As over 140 countries have now enacted a privacy or data protection law, FPF continues to expand its international presence to help data protection experts grapple with the challenges of ensuring responsible uses of data. Following the appointment of Malavika Raghavan as Senior Fellow for India in 2020, the launch of the FPF Asia-Pacific office further expands FPF’s international reach.
 
Dr. Gabriela Zanfir-Fortuna leads FPF’s international efforts and works on global privacy developments and European data protection law and policy. The FPF Europe office is led by Dr. Rob van Eijk, who prior to joining FPF worked at the Dutch Data Protection Authority as Senior Supervision Officer and Technologist for nearly ten years. FPF has created thriving partnerships with leading privacy research organizations in the European Union, such as Dublin City University and the Brussels Privacy Hub of the Vrije Universiteit Brussel (VUB). FPF continues to serve as a leading voice in Europe on issues of international data flows, the ethics of AI, and emerging privacy issues. FPF Europe recently published a report comparing the regulatory strategy for 2021-2022 of 15 Data Protection Authorities to provide insights into the future of enforcement and regulatory action in the EU.
 
Outside of Europe, FPF has launched a variety of projects to advance tech policy leadership and scholarship in regions around the world, including Israel and Latin America. The work of the Israel Tech Policy Institute (ITPI), led by Managing Director Limor Shmerling Magazanik, includes publishing a report on AI Ethics in Government Services and organizing an OECD workshop with the Israeli Ministry of Health on access to health data for research.
 
In Latin America, FPF has partnered with the leading research association Data Privacy Brasil and has provided in-depth analysis of Brazil’s LGPD privacy legislation and of various data privacy cases decided by the Brazilian Supreme Court. FPF recently organized a panel during the CPDP LatAm Conference that explored the state of Latin American data protection laws alongside experts from Uber, the University of Brasilia, and the Interamerican Institute of Human Rights.
 

Read Dr. Girot’s Q&A on the FPF blog. Stay updated: Sign up for FPF Asia-Pacific email alerts.
 

FPF and Leading Health & Equity Organizations Issue Principles for Privacy & Equity in Digital Contact Tracing Technologies

With support from the Robert Wood Johnson Foundation, FPF engaged leaders within the privacy and equity communities to develop actionable guiding principles and a framework to help bolster the responsible implementation of digital contact tracing technologies (DCTT). Today, seven privacy, civil rights, and health equity organizations signed on to these guiding principles for organizations implementing DCTT.

“We learned early in our Privacy and Pandemics initiative that unresolved ethical, legal, social, and equity issues may challenge the responsible implementation of digital contact tracing technologies,” said Jules Polonetsky, CEO of the Future of Privacy Forum. “So we engaged leaders within the civil rights, health equity, and privacy communities to create a set of actionable principles to help guide organizations implementing digital contact tracing that respects individual rights.”

Contact tracing has long been used to monitor the spread of infectious diseases. In light of COVID-19, governments and companies began deploying digital exposure notification using Bluetooth and geolocation data on mobile devices to boost contact tracing efforts and quickly identify individuals who may have been exposed to the virus. However, as DCTT begins to play an important role in public health, it is important to take the necessary steps to ensure equity in access to DCTT and to understand the societal risks and tradeoffs that might accompany its implementation, today and in the future. Governance efforts that seek to better understand these risks will be better positioned to bolster public trust in DCTT.

“LGBT Tech is proud to have participated in the development of the Principles and Framework alongside FPF and other organizations. We are heartened to see that the focus of these principles is on historically underserved and under-resourced communities everywhere, like the LGBTQ+ community. We believe the Principles and Framework will help ensure that the needs and vulnerabilities of these populations are at the forefront during today’s pandemic and future pandemics.”

Carlos Gutierrez, Deputy Director, and General Counsel, LGBT Tech

“If we establish practices that protect individual privacy and equity, digital contact tracing technologies could play a pivotal role in tracking infectious diseases,” said Dr. Rachele Hendricks-Sturrup, Research Director at the Duke-Margolis Center for Health Policy. “These principles allow organizations implementing digital contact tracing to take ethical and responsible approaches to how their technology collects, tracks, and shares personal information.”

FPF, together with Dialogue on Diversity, the National Alliance Against Disparities in Patient Health (NADPH), BrightHive, and LGBT Tech, developed the principles, which advise organizations implementing DCTT to commit to the following actions:

  1. Be Transparent About How Data Is Used and Shared.
  2. Apply Strong De-Identification Techniques and Solutions.
  3. Empower Users Through Tiered Opt-in/Opt-out Features and Data Minimization.
  4. Acknowledge and Address Privacy, Security, and Nondiscrimination Protection Gaps.
  5. Create Equitable Access to DCTT.
  6. Acknowledge and Address Implicit Bias Within and Across Public and Private Settings.
  7. Democratize Data for Public Good While Employing Appropriate Privacy Safeguards.
  8. Adopt Privacy-By-Design Standards That Make DCTT Broadly Accessible.

Additional supporters of these principles include the Center for Democracy and Technology and Human Rights First.

To learn more and sign on to the DCTT Principles visit fpf.org/DCTT.

Support for this program was provided by the Robert Wood Johnson Foundation. The views expressed here do not necessarily reflect the views of the Foundation.

The Spectrum of AI: Companion to the FPF AI Infographic

In December 2020, FPF published the Spectrum of Artificial Intelligence – An Infographic Tool, designed to visually display the variety and complexity of Artificial Intelligence (AI) systems, the fields this science is based on, and a small sample of the use cases these technologies support for consumers. Today, we are releasing the white paper The Spectrum of Artificial Intelligence – Companion to the FPF AI Infographic to expand on the information included in this educational resource and to describe in more detail how the graphic can be used as an aid in education or in developing legislation or other regulatory guidance around AI-based systems. We identify additional, specific use cases for various AI technologies and explain how differing algorithmic architectures and data demands present varying risks and benefits. We discuss the spectrum of algorithmic technology and demonstrate how design factors, data use, and model training processes should be considered in specific regulatory approaches.

Artificial intelligence is a term with a long history. Meant to denote systems that accomplish tasks otherwise understood to require human intelligence, AI is directly connected to the development of computer science but draws on a myriad of academic fields and disciplines, including philosophy, social science, physics, mathematics, logic, statistics, and ethics. AI, as it is designed and used today, is made possible by the recent advent of unprecedentedly large datasets, increased computational power, and advances in data science, machine learning, and statistical modeling. AI models include programming and system design based on a number of sub-categories, such as robotics, expert systems, scheduling and planning systems, natural language processing, neural networks, computer sensing, and machine learning. In many cases of consumer-facing AI, multiple forms of AI are used together to accomplish the overall performance goal specified for the system. In addition to considerations of algorithmic design, data flows, and programming languages, AI systems are most robust for equitable and stable consumer uses when human designers also consider the limitations of machine hardware, cybersecurity, and user-interface design.

This paper outlines the spectrum of AI technology, from rules-based and symbolic AI to advanced, developing forms of neural networks, and seeks to put them in the context of other sciences and disciplines, as well as to emphasize the importance of security, user interface, and other design factors. Additionally, we seek to make this understandable by providing specific use cases for the various types of AI and by showing how the different architectures and data demands present specific risks and benefits.

Across the spectrum, AI is a combination of various types of reasoning. Rules-based or symbolic AI is the form of algorithmic design wherein humans draft a complete program of logical rules for a computer to follow. Newer AI advances, particularly machine learning systems based on neural networks, power computers that carry out the programmer’s initial design but then adapt based on what the system can glean from patterns in the data. These systems can score the accuracy of their results and feed those outcomes back into the model in order to improve the success of subsequent iterations of the program.
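
To make this contrast concrete, here is a minimal, hypothetical sketch in Python: a hand-written rule whose logic is fixed by the programmer, next to a trivially “learned” rule whose threshold is adjusted by scoring outcomes against labeled examples. The data and names are invented for illustration and are not drawn from the white paper.

```python
# Illustrative sketch only: a hand-coded rule vs. a threshold "learned" from data.

# Rules-based (symbolic) approach: a human writes the complete logic up front.
def rule_based_filter(num_links: int) -> bool:
    return num_links > 5  # fixed rule chosen by the programmer

# Learning-based approach: the program starts from an initial guess and adjusts
# its threshold based on how its predictions score against labeled examples,
# feeding outcomes back in to improve subsequent iterations.
def train_threshold(examples, labels, epochs=50, lr=0.1):
    threshold = 0.0
    for _ in range(epochs):
        for x, is_spam in zip(examples, labels):
            predicted = x > threshold
            if predicted and not is_spam:
                threshold += lr   # false positive: raise the bar
            elif not predicted and is_spam:
                threshold -= lr   # false negative: lower the bar
    return threshold

examples = [1, 2, 8, 9, 3, 10]                    # number of links per message
labels = [False, False, True, True, False, True]  # hand-labeled "spam" flags
print(f"learned threshold: {train_threshold(examples, labels):.1f}")
```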

AI systems operate across a broad spectrum of scale. Processes using these technologies can be designed to seek solutions to macro-level problems like environmental challenges: undetected earthquakes, pollution control, and other natural disaster responses. They are also incorporated into personal-level systems that provide greater access to commercial, educational, economic, and professional opportunities. If regulation is to be effective, it should focus on both technical details and the underlying values and rights that must be protected from adverse uses of AI, to ensure that AI is ultimately used to promote human dignity and welfare.

A .pdf version of the printed paper is available here.

Now, On the Internet, EVERYONE Knows You’re a Dog

An Introduction to Digital Identity

By Noah Katz and Brenda Leong

What is Digital Identity?

As you go through your day, everyone wants to know something about you. The bouncer at a bar needs to know you are over 21, the park ranger wants to see your fishing license, your doctor has to review your medical history, a lender needs to know your credit score, the police must check your driver’s license, and airport security has to confirm your ticket and passport. In the past, you would have a separate piece of paper or plastic for each of these exchanges, but the Information Revolution has caused a dramatic shift to digital and virtual channels. Shopping, banking, healthcare, gaming, even therapy and exercise, are all activities that can now be performed partially or entirely using online platforms and services. However, systems using digital transactions struggle to establish trust around personal identification because personal login credentials vary for every account and passwords are forgettable and frequently insecure. Largely because of this “trust gap,” the equivalent of personal identity credentials like a passport and driver’s license have notably lagged other services in moving to an online format. That is starting to change. 

Potentially, all these tasks can be accomplished with a single “digital identity,” a system of electronically stored attributes and/or credentials that uniquely identify an individual person. Digital identity systems vary in complexity. At its most basic, a digital ID would simply recreate a physical ID in a digital format. For instance, digital driver’s licenses are coming to augment, and possibly eventually replace, the physical driver’s license or state-issued ID we carry now. Available via an app that provides the platform for verification and security, these digital versions can be used in the same way as a physical ID, to provide authentication of individual attributes like a unique ID number (Social Security number), birthdate (driver’s license), citizenship (passport), or other government-issued, legal aspects of personhood.

At the other end of the spectrum, a fully integrated digital identity system would provide a platform for a complete wallet and verification process, usable both online and in the physical world. That is, it would authenticate you as an individual, as above, but also tie to all the accounts and access rights you hold, including the credentials represented by those attributes. Such a system would enable you to share or verify your school transcripts or awarded degrees, provide your health records, or access your online accounts and stored data. This sort of credentialing program can also act as an electronic signature, timestamp, or seal for financial and legal transactions. 

A variety of technologies are being explored to provide this type of platform, although there is no clear consensus or standard at this time. Some advocate for “self-sovereign identity,” wherein a blockchain-based platform allows individuals to directly control the release of their own information to designated recipients. Others propose mobile-based systems that use a combination of cloud and local storage on a mobile device, in conjunction with an app, to offer a single identity-verification platform.

These proposed identification systems are being designed for use in commercial contexts as well as for accessing government systems and benefits. In countries with national identification cards (most countries other than the U.S. and the UK), the national ID may come to be issued digitally even sooner. Estonia has the most advanced example of such a system: everyone there who has been issued a government ID can provide digital signatures and authentication via the mobile platform, and can use it as a driver’s license, a health service identifier, a pass for public transport, a travel document, a means to vote, or for banking.

The concept of named spaces and unique identifiers is older than the internet itself. Founded in 1841, and fully computerized by the 1970s, Dun & Bradstreet operates a database containing detailed credit information on over 285 million businesses, making it a key provider of commercial data, analytics, and related services for over a century. Its unique 9-digit identifier is the foundation of its entire system.

The UK’s Companies House, the state registrar for public companies, traces back to the Joint Stock Companies Act of 1844 and the formation of shareholder enterprises. As with D&B, companies are recorded on a public register, but with the added requirement to include the personal data that the Registrar maintains on company personnel; for example, directors must record their name, address, occupation, nationality, and date of birth. The advent of mandatory passports in the twentieth century, along with pseudonymous identification of individuals by governments, such as with Social Security numbers, furthered this trend of personal records based on unique individual identities (and not without controversy).

With the advent of the internet, online identities exploded into every facet of financial, commercial, entertainment, and educational or professional lives, and today many people have tens, if not hundreds, of personal accounts and affiliations, each with a unique, numbered, or assigned digital record. Maintaining awareness of all our accounts has become almost impossible, much less having adequate and accurate oversight as to the security of each vendor, site, or set of login credentials. The possibility of transitioning these accounts to be interoperable with a single, secure digital ID is now becoming more feasible due to advances in mobile technology, faster and less expensive biometric systems, and the availability of cloud services and fast processing capabilities.  

How Digital Identity Works

In the past, a new patient at the doctor’s office had to provide at least three separately sourced documents: a driver’s license, a health insurance card, and a medical history. Even now, many offices take a physical license or insurance card and make a digital copy for their files. A digital wallet would allow a new patient to identify themselves, provide proof of insurance, and share their medical history all at once, via their smartphone or another access option.

Importantly, by digitally sending a one-time identity token directly to the vendor or health provider, these systems can be designed to provide authentication or verification of a status or credential (e.g., an awarded degree) without physically handing over a smartphone and without providing the underlying data (the full transcript). By granularly releasing identity data only as necessary for authorization, an ID holder does not have to provide more information than is needed to complete the transaction. That bouncer at the bar only needs to know you are “over 21,” not your actual birthdate, much less your name and address.
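
To illustrate the minimal-disclosure idea, the hypothetical sketch below has a wallet issue a short-lived, one-time token asserting only a derived attribute (“over 21”), never the underlying birthdate. It is a simplification: the shared HMAC key stands in for the public-key credentials and standardized formats a real system would use.

```python
import hashlib
import hmac
import json
import secrets
import time

# Hypothetical key shared between the credential issuer and the verifier;
# a real deployment would use public-key signatures instead.
ISSUER_KEY = secrets.token_bytes(32)

def issue_token(attribute: str, value: bool, ttl_seconds: int = 60) -> dict:
    """Issue a one-time token asserting a single derived attribute."""
    claim = {
        "attribute": attribute,          # e.g. "over_21" -- not the birthdate
        "value": value,
        "nonce": secrets.token_hex(8),   # makes the token single-use
        "expires": int(time.time()) + ttl_seconds,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_token(token: dict) -> bool:
    claim = {k: v for k, v in token.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, token["signature"])
            and time.time() < token["expires"])

# The bar's scanner learns only "over_21: True" -- nothing else.
print(verify_token(issue_token("over_21", True)))  # True
```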

An effective digital ID must be able to perform at least four main tasks:

To authenticate an individual, the system must ensure that a person is who they claim to be, protecting sufficiently against both false negatives (denying access to the legitimate account holder) and false positives (wrongly granting access to unauthorized individuals). Security best practices require that authentication be accomplished via a multi-factor system requiring two of the three options: something you know (a password, PIN code, or security question), something you have (a smart card, specific mobile device, or USB token), or something you are (a biometric).

[NOTE: a biometric is a unique, measurable physical characteristic that can be used to recognize or identify a specific individual. Facial images, fingerprints, and iris scans are all examples of biometrics. For authentication purposes, such as in the digital identity systems under discussion, biometrics are matched in a 1:1 or 1:few process against an enrolled template. The template, specific to the system provider and not interoperable with other systems, may be stored locally on the device or in cloud storage. However, since operational or circumstantial considerations may preclude the use of biometrics in all cases, systems intended for mass access must offer alternatives as well. The details of biometric systems and their particular risks and benefits are beyond the scope of this article, but while not all digital identity systems are based on biometrics, most will likely include some form of biometric in their authentication processing.]
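
As a minimal sketch of the “two of three” rule above: authentication succeeds only when the verified factors span at least two distinct categories. The factor names are hypothetical placeholders for real password, device, and biometric checks.

```python
# Map each (hypothetical) factor to its category: know / have / are.
FACTOR_CATEGORIES = {
    "password": "know", "pin": "know", "security_question": "know",
    "smart_card": "have", "mobile_device": "have", "usb_token": "have",
    "fingerprint": "are", "face": "are", "iris": "are",
}

def authenticate(verified_factors: list[str]) -> bool:
    """Require at least two *distinct* factor categories, not just two factors."""
    categories = {FACTOR_CATEGORIES[f] for f in verified_factors
                  if f in FACTOR_CATEGORIES}
    return len(categories) >= 2

print(authenticate(["password", "pin"]))          # False: both are "know"
print(authenticate(["password", "fingerprint"]))  # True: "know" + "are"
```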

Once an ID holder is authenticated, the specific attributes or credentials must be verified. This involves confirming that the ID holder has earned or been issued the credentialed attributes they are claiming, whether from a financial institution, an employer, an educational institution, or a government agency.  

Authentication and verification may be all that is required for some transactions, but where needed, the system must also be able to confirm authorization, that is, to determine what the person is allowed to see or do within a given system. Successful privacy and security for businesses, organizations, and governments require the enforcement of rigorous access controls. The person who can see certain data is not always authorized to change or manipulate it, and the person authorized to manipulate or process it may not be entitled to share or delete it. Successfully setting and enforcing these controls is one of the most challenging tasks for any organization that collects and uses personal data.
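
These tiered rights can be pictured as an access-control table in which each action requires its own grant, so viewing never implies editing, and editing never implies sharing or deleting. The roles and resource names in this sketch are invented for illustration.

```python
# Hypothetical access-control table: one set of allowed actions per
# (role, resource) pair.
PERMISSIONS = {
    ("nurse", "medical_record"): {"view"},
    ("doctor", "medical_record"): {"view", "edit"},
    ("records_admin", "medical_record"): {"view", "share", "delete"},
}

def is_authorized(role: str, resource: str, action: str) -> bool:
    """An action is allowed only if it was explicitly granted."""
    return action in PERMISSIONS.get((role, resource), set())

print(is_authorized("nurse", "medical_record", "view"))    # True
print(is_authorized("nurse", "medical_record", "edit"))    # False
print(is_authorized("doctor", "medical_record", "share"))  # False
```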

While the first three steps in digital identity systems exist in various forms already, a truly universal digital identity is likely to succeed at mass scale only if it is federated, meaning that the ID must be usable across institutional, sectoral, and geographic boundaries. A federated identity system would be the most significant departure from the account-specific login and access processes that exist today. Accomplishing such wide-ranging compatibility will require a common set of open standards that institutions, sectors, and countries establish collaboratively and implement globally. A digital wallet will need to seamlessly grant access across many networks: a movie theater verifying that entrants are over 17, banks processing loan applications, hospitals establishing patient status and accessing records, airports boarding passengers, and amusement parks and stadiums providing scheduled performances and perks. One way to picture this federation is sketched below.
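
In this picture, relying parties in different sectors all verify a credential against a shared registry of trusted issuer keys, instead of each running its own account system. The sketch below illustrates only the idea, not any existing standard, and assumes the third-party cryptography package for Ed25519 signatures.

```python
# Illustrative federated verification (assumes: pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A shared trust registry maps issuer IDs to public keys. Any relying party
# (bank, hospital, airline) that trusts the registry can verify credentials
# from any registered issuer -- no per-site passwords needed.
issuer_key = Ed25519PrivateKey.generate()
TRUST_REGISTRY = {"gov-dmv": issuer_key.public_key()}  # hypothetical issuer ID

credential = b'{"holder": "alice", "attribute": "licensed_driver"}'
signature = issuer_key.sign(credential)

def relying_party_verify(issuer_id: str, credential: bytes, signature: bytes) -> bool:
    public_key = TRUST_REGISTRY.get(issuer_id)
    if public_key is None:
        return False  # unknown issuer: outside the federation
    try:
        public_key.verify(signature, credential)
        return True
    except InvalidSignature:
        return False

# The same credential verifies at a bank, a hospital, or an airport gate.
print(relying_party_verify("gov-dmv", credential, signature))  # True
```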

Global banking and financial services are leading the way on this sort of broad implementation, making online banking a constructive digital ID use case:

Banks are motivated to forge ahead on such digital identity systems to improve fraud detection, streamline “know your customer” compliance processes, increase their ability to stop money-laundering and other finance-related crimes, and offer superior customer experiences. But by creating secure, standardized digital identity access for online banking, they may also offer engagement to the large portions of the globe that are currently un- or under-banked, and/or who have minimal governmental infrastructure around legal identity systems.

The Challenges and Opportunities

Privacy, security, equity, data abuse, and user control all raise unique challenges and opportunities for digital ID. 

Digital identity, if not deployed correctly, may undermine privacy rights. If not implemented responsibly, and carefully controlled with both technical and legal safeguards, digital IDs might allow for increased location tracking and user profiling, already a concern with cell phone technology. Blockchain technology, if not designed carefully, creates a public, immutable record of information exchanges: where, when, and why a digital ID was requested. And a given digital ID provider may have too much power, including the ability to block ID holders from accessing their digital accounts. However, digital IDs also offer the possibility of increased privacy protection if systems are effectively designed to share only the minimum necessary information, and identification is established only up to the level necessary for the particular exchange or transaction. “Privacy by design,” along with appropriate defaults and system controls, can prevent any of the network participants, including the operator, from gaining complete access to users’ transactions, particularly if accompanied by appropriate legislative or regulatory boundaries.

Digital ID likewise has both pros and cons for security. While not perfect, digital IDs are generally harder to lose or counterfeit than a physical document, and they offer significantly greater security than an individual’s hundreds of separate login credentials spread across sites with uncertain levels of protection. However, poor adherence to best practices may result in a centralized store of personal and sensitive information, which may become a more appealing target for hackers and increase the risk of a mass compromise. These centralization risks can be reduced by local storage of authenticating factors like biometrics and by distributed storage of other data with appropriate security measures and controls.

Inequities can occur along a number of different axes. Since digital identity designs may reflect society’s biases, it is important to mandate and continually measure inclusion and performance. For instance, the UK’s digital ID framework requires ID issuers to submit an annual exclusion report. In addition, because not everyone has a smartphone or internet access, digital IDs risk deepening inequities for those with limited connectivity. Without reliable digital access, groups that have traditionally struggled may continue to lack the privileges that digital IDs promise to provide. On the other hand, according to the World Bank, an estimated 1.1 billion people worldwide cannot officially prove or establish their legal identity. In countries or situations without clear legal processes, or lacking information infrastructure, digital identity systems have the potential to give people who do have smartphones or internet access the ability to receive healthcare, education, finance, and other essential services. Even those without access to a digital device could use a secure physical form, like a barcode, to maintain their digital identity.

Policy Impacts and Conclusion

Individuals are used to being able to easily control the use of their physical documents. When you hand your passport to a TSA agent, you can observe who sees it and how it is used. A digital ID holder will need the same assurances, understanding, and trust. Ideally, therefore, users should be able to identify every instance in which their identity was accessed by a vendor. Early systems, like the Australian Digital License App, give citizens some control over their credentials by enabling users to specify the information to share or display. Legislative bodies and regulatory agencies designing or controlling such systems should work closely with industry representatives, security experts, consumer protection organizations, civil rights advocates, and other stakeholders to ensure fair systems are established and monitored appropriately.

Transparency in development, public adoption processes, and procurement systems will be vital to public trust in any such systems, whether privately or publicly operated. In some cases, such systems may even help educate users and increase their awareness of the information that is already collected and held about them, and where and how it is being used, as well as make it easier for them to exert control over the necessary sharing of their information.

Digital identification, integrated to greater or lesser degrees, seems an almost inevitable next step in our digital lives, and overall offers promising opportunities to improve our access to and control over the information about us already spanning the internet. But it is crucial that, moving forward, digital ID systems are responsibly designed, implemented, and regulated to ensure the necessary privacy and security standards, as well as to prevent the abuse of individuals or the perpetuation of inequities against vulnerable populations. While there are important cautions, digital identity has the potential to transform the way we interact with the world, as our “selves” take on new dimensions and opportunities.

At the intersection of AI and Data Protection law: Automated Decision-Making Rules, a Global Perspective (CPDP LatAm Panel)

On Thursday, 15 July 2021, the Future of Privacy Forum (FPF) organised a panel during the CPDP LatAm Conference titled ‘At the Intersection of AI and Data Protection law: Automated Decision Making Rules, a Global Perspective’. The aim of the panel was to explore how existing data protection laws around the world apply to profiling and automated decision-making practices. In light of the European Commission’s recent AI Regulation proposal, it is important to explore how, and to what extent, existing laws already protect individuals’ fundamental rights and freedoms against automated processing activities driven by AI technologies.

The panel consisted of Katerina Demetzou, Policy Fellow for Global Privacy at the Future of Privacy Forum; Simon Hania, Senior Director and Data Protection Officer at Uber; Prof. Laura Schertel Mendes, Law Professor at the University of Brasilia; and Eduardo Bertoni, Representative for the Regional Office for South America, Interamerican Institute of Human Rights. The panel discussion was moderated by Dr. Gabriela Zanfir-Fortuna, Director for Global Privacy at the Future of Privacy Forum.


Data Protection laws apply to ADM Practices in light of specific provisions and/or of their broad material scope

To kick off the conversation, we presented preliminary results of an ongoing project led by the Global Privacy Team at FPF on Automated Decision Making (ADM) around the world. Seven jurisdictions were presented comparatively, of which five already have a general data protection law in force (the EU, Brazil, Japan, South Korea, and South Africa), while two have data protection bills expected to become law in 2021 (China and India).

For the purposes of this analysis, the following provisions are being examined: the definitions of ‘processing operation’ and ‘personal data’, two concepts essential to defining the material scope of a data protection law; the principles of fairness and transparency, and the legal obligations and rights that relate to these two principles (e.g., the right of access, the right to an explanation, the right to meaningful information, etc.); and provisions that specifically refer to ADM and profiling (e.g., Article 22 GDPR).

The preliminary findings are summarized in the following points:

Uber, Ola and Foodinho Cases: National Courts and DPAs decide on ADM cases on the basis of existing laws

In recent months, Dutch national courts and the Italian Data Protection Authority have ruled on complaints brought by employees of the ride-hailing companies Uber and Ola and of the food delivery company Foodinho, challenging decisions the companies reached with the use of algorithms. Simon Hania summarised the key points of these decisions. It is important to mention that all the cases arose in the employment context and were all filed back in 2019, which means that more outcomes of ADM cases may be expected in the near future.

The first Uber case concerned the matching between drivers and riders, which, as the Court held, qualifies as a decision based solely on automated means but does not lead to any ‘legal or similarly significant effect’. Therefore, Article 22 GDPR is not applicable. The second Uber case concerned the deactivation of drivers’ accounts due to signals of potentially fraudulent behaviour or misconduct. There, the Court held that Article 22 is not applicable because, as the company showed, there is always human intervention before an account is deactivated, and the actual final decision is made by a human.

The third example presented was the Ola case, in which the Court decided that the company’s practice of withholding drivers’ money as a penalty for misconduct qualifies as a decision based solely on automated means producing a ‘legal or similarly significant effect’, and that Article 22 GDPR therefore applies.

In the last example, Foodinho, the decision-making on how well couriers perform was indeed deemed to be based solely on automated means, and it produced a significant effect on the data subjects (the couriers). The problem highlighted was the way the performance metrics were established, and specifically the accuracy of the profiles created: they were not sufficiently accurate given the significance of the effects they produced.

This last point spurs discussion on the importance of the principle of data accuracy, an often overlooked principle. Having accurate data as the basis for decision-making is crucial in order to avoid discriminatory practices and achieve fairer AI systems. As Simon Hania emphasised, we should have information available that is fit for purpose in order to reach accurate decisions. This suggests that the data minimisation principle should be understood as data rightsizing, not as a requirement to simply minimise the information processed to reach a decision.

LGPD: Brazil’s Data Protection Law and its application to ADM practices

The LGPD, Brazil’s recently passed data protection law, is heavily influenced by the EU GDPR in general, but also specifically on the topic of ADM processing. Article 20 of the LGPD protects individuals against decisions that are made only on the basis of automated processing of personal data, when these decisions “affect their interests”. The wording of this provision seems to suggest a wider protection than the relevant Article 22 of the GDPR which requires that the decision “has a legal effect or significantly affects the data subject”. Additionally, Article 20 LGPD provides individuals with a right to an explanation and with the right to request a review of the decision. 

In her presentation, Laura Mendes highlighted two points that require further clarification: first, it is still unclear how “solely automated” is defined; second, it is not clear what the degree of review of a decision should be, or whether the review must be performed by a human. Two provisions are core to the discussion on ADM practices:

(a) Art 6 IX LGPD, which introduces the principle of non-discrimination as a separate data protection principle. According to it, processing of data shall not take place for “discriminatory, unlawful or abusive purposes”. 

(b) Article 21 LGPD reads “The personal data relating to the regular exercise of rights by the data subjects cannot be used against them.” As Laura Mendes suggested, Article 21 LGPD is a provision with great potential regarding non-discrimination in ADM. 

Latin America & ADM Regulation: there is no homogeneity in Latin American laws but the Ibero-American Network seems to be setting a common tone

In the last part of the panel discussion, a wider picture of the situation in Latin America was presented. It should be clear that Latin America does not have a common, homogenous approach to data protection. For example, while Argentina has had a data protection law since 2000, for which it obtained an adequacy decision from the EU, Chile is in the process of adopting a data protection law but still has a long way to go, and Peru, Ecuador, and Colombia are trying to modernize their laws.

The American Convention on Human Rights recognises a right to privacy and a right to intimacy, but there is still no interpretation by the Inter-American Court of Human Rights of either the right to data protection or, specifically, the topic of ADM practices. However, it should be kept in mind that, as was the case with Brazil’s LGPD, the GDPR has strongly influenced Latin America’s approach to data protection. Another common reference for Latin American countries is the Ibero-American Network which, as Eduardo Bertoni explained in his talk, does not produce hard law but publishes recommendations that are followed by the respective jurisdictions. Regarding the discussion on ADM specifically, Eduardo Bertoni mentioned the following initiatives taken in the Ibero-American space:

Main Takeaways

While there is an ongoing debate around the regulation of AI systems and automated processing in light of the recently proposed EU AI Act, this panel brought attention to existing data protection laws, which are already equipped with provisions that protect individuals against automated processing operations. The main takeaways of this panel are the following:

Looking ahead, the debate around the regulation of AI systems will continue to be heated, and the protection of fundamental rights and freedoms in light of automated processing operations will remain a top priority. In this debate we should keep in mind that the proposed AI Regulation is being introduced into an already existing system of laws, such as data protection law, consumer law, and labour law. It is important to be clear about the reach and the nature of these laws so as to identify the gaps that the AI Regulation, or any other future proposal, is meant to fill. This panel highlighted that ADM and automated processing are not unregulated. On the contrary, current laws protect individuals by putting in place binding overarching principles, legal obligations, and rights. At the same time, courts and national authorities have already started enforcing these laws.

Watch a recording of the panel HERE.

Read more from our Global Privacy series:

Insights into the future of data protection enforcement: Regulatory strategies of European Data Protection Authorities for 2021-2022

Spotlight on the emerging Chinese data protection framework: Lessons learned from the unprecedented investigation of Didi Chuxing

A new era for Japanese Data Protection: 2020 Amendments to the APPI

Image by digital designer from Pixabay

Insights into the Future of Data Protection Enforcement: Regulatory Strategies of European Data Protection Authorities for 2021-2022

The Future of Privacy Forum has released a report, “Insights into the future of data protection enforcement: Regulatory strategies of European Data Protection Authorities for 2021-2022”.

The European Data Protection Authorities (DPAs) are arguably the most powerful data protection and privacy regulators in the world, having been granted broad powers and competences, in addition to independence, by the European Union’s General Data Protection Regulation (GDPR). With GDPR enforcement visibly ramping up in the past year, it is important to gain insight into the key enforcement areas targeted by regulators, as well as to understand the complex or sensitive personal data processing activities for which DPAs plan to provide compliance guidelines or to shape public policy.

Last year, FPF released a report called New Decade, New Priorities: A summary of twelve European Data Protection Authorities’ strategic and operational plans for 2020 and beyond. It outlined EU DPAs’ regulatory priorities for 2020 and the ensuing years, based on strategic documents released by those authorities in the first half of last year. Since then, most DPAs have published their 2020 annual reports, as well as new short- or long-term strategies. These shed light on the areas to which DPAs are likely to devote significant regulatory efforts and resources, across a broad scope: guidance, awareness-raising, corrective measures, and enforcement actions.

We have compiled and analyzed these new strategic documents, describing where different DPA strategies have touchpoints and noteworthy particularities. The report contains links to and translated summaries of strategic documents from 15 authorities: the DPAs of France (FR), Portugal (PT), Belgium (BE), Norway (NO), Sweden (SE), Ireland (IE), Bulgaria (BG), Denmark (DK), Finland (FI), Latvia (LV), Lithuania (LT), Luxembourg (LU), and Germany (Bavaria), as well as the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS). These documents complement or replace the ones that were included in our 2020 report.


Some of our main conclusions include: 

Spotlight on the Emerging Chinese Data Protection Framework: Lessons Learned from the Unprecedented Investigation of Didi Chuxing

As China makes headlines with its ramped-up efforts to build a comprehensive data protection, privacy, and cybersecurity legal framework with broad extraterritorial effects, a recent landmark enforcement action by the Cyberspace Administration of China (CAC) against one of the largest tech companies in the country shines a light on how the future privacy regulator and the Chinese government are approaching enforcement of data-related laws.

On July 2, 2021, the CAC announced the launch of a cybersecurity review of Didi Chuxing (Didi), two days after the Chinese ride-hailing leader’s $4 billion IPO in New York. This is the first time the CAC has publicly initiated this type of review, likely making the Didi investigation the highest-profile case launched under China’s emerging data protection framework.

The “cybersecurity review” is a relatively new enforcement tool in the CAC’s regulatory toolbox, introduced by the Cybersecurity Review Measures (CRM) in June 2020. It addresses “critical information infrastructure operators”, who must anticipate potential national security risks of network products and services. If an operator determines that these products and services could affect national security, it must apply to the CAC for a cybersecurity review. Upon receiving the application, the CAC then determines whether to launch a formal administrative audit to test for compliance and security, and this is the first time it has publicly decided to do so.

The consequence? The CAC demanded that Didi prevent new users from signing up to its service until completion of the review. Following this, regulators also issued a notice requesting that the company remove its 25 related apps from the app store, and announced reviews of three more Chinese companies. These measures were imposed before the investigation was completed, making them some of the toughest preliminary measures in data-related investigations by digital regulators around the world.

Below we consider the high-level takeaways of the decision and its implications for data protection in China going forward, considering that the CAC is designated as the enforcer of the future Personal Information Protection Law (PIPL), which is at its second reading and is expected to pass by the end of this year, and of the 2017 Cybersecurity Law (CSL).

1. “Critical Information Infrastructure Operators” could mean any tech company, including ride-hailing service providers

Some of the most consequential provisions of the CSL and of the future PIPL apply to “Critical Information Infrastructure Operators”, a notion that has been surrounded by some ambiguity since it was introduced into China’s legal framework. Article 37 of the Cybersecurity Law (CSL) requires these operators to store within China the personal information and “important data” collected and generated by their operations in China. If they need to send such data abroad due to business necessity, they must additionally undergo a security assessment by competent authorities.

Since the enactment of the CSL in 2017, the vague wording of the article has generated much confusion as to what constitutes “critical information infrastructure”, with many raising concerns that the open-ended definition could theoretically encompass all types of data processing activities. In part, the CRM aims to streamline this process and provide a mechanism for these operators to undergo mandatory security reviews under the CSL.

Additionally, the draft PIPL also includes specific rules for Critical Information Infrastructure Operators, in particular with regard to international data transfers. The draft law requires that where such operators need to provide personal information abroad, they must “pass a security assessment organized by the State cybersecurity and informatization department” (Article 40), thus softening the strict data localisation requirement. This appears to be the same type of security assessment performed in the Didi investigation, making this case relevant for understanding how the CAC and Chinese authorities approach such assessments.

The fact that Didi, a ride-hailing company, is treated as a Critical Information Infrastructure Operator indicates that the authorities indeed take a broad view of what “critical infrastructure” means, creating more uncertainty about which entities are subject to this enhanced level of scrutiny.

2. There is a link between mobility data, including location data, and national security, making it an ideal candidate for “important data” 

While a detailed rationale for the “cybersecurity review” of Didi has not been published beyond the announcement that it is proceeding pursuant to the CRM, the nature of the data processed by a ride-hailing company like Didi suggests that the authorities see a link between mobility data, data localization requirements, and national security.

Chinese regulators have recently published draft rules for privacy and security in the connected vehicles industry through the Draft Several Provisions on the Management of Automobile Data Security, which require operators to block cross-border data transfers when their “management and legal systems are imperfect.” While ambiguous, this provision leaves open the possibility that insufficient compliance provides a ground for strict localization. Because access to data is critical for success in this industry, the Chinese government has attached significant importance to mobility-relevant data.

In fact, through this investigation, the authorities may also be signaling to the market that such data could qualify as “important data” — a concept used in many data protection laws in China to refer to information that implicates national security. Indeed, the intertwining of national security with personal information protection is becoming one of the key features of China’s data protection regime. 

The use of the CRM process to crack down on Didi indicates that Beijing takes the disclosure of such information very seriously. Reports indicate that the CAC requested that Didi alter its mapping function prior to its IPO in the United States. Didi’s mapping function is primarily responsible for organizing complex location data, including ride pickup and drop-off locations. Given certain mandatory disclosure requirements under U.S. securities law for companies that want to be publicly traded, particularly the recently enacted Holding Foreign Companies Accountable Act (HFCAA), the Chinese government may have been overcautious in seeking to prevent the disclosure of any sensitive information to US authorities, even though US law does not in fact require Didi to disclose this type of user information.

3. The new data protection framework will likely not be a paper tiger

Removing all connected apps from the app store and preventing any new users from registering with the company are measures with very serious consequences for a tech company and its business. The CAC imposed them as preliminary measures to remain in place for the duration of the Didi review. They also confirm a radical turn in the government’s attitude towards enforcement and compliance in the digital market, following a trend that has been visible for a while.

The increased pressure to comply stands in contrast with the approach the Chinese government previously took towards the growth of Internet companies. In the past, emerging tech firms were allowed a laissez-faire approach to their growth strategies and often pursued aggressive data collection and processing methods to compete with other tech giants and their global competitors, while downplaying the importance of compliance. With the emergence of broad cybersecurity and data protection measures like the Data Security Law (DSL), which will enter into force this September, the draft PIPL, and the 2017 CSL, Chinese regulators are signaling that non-compliance will result in real penalties.

The Didi decision also exposes some ambiguity in China’s regulatory system and in the way it is enforced. In particular, it is difficult to pinpoint exactly where the review was initiated within China’s complex administrative bureaucracy. Twelve ministries help coordinate cybersecurity reviews under the CRM, each with its own offices, personnel, and regulatory competences. In many instances, these competences may conflict as regulators attempt to assert their own prerogatives and priorities over particular tech companies. The CAC announced in mid-July that more than six Chinese agencies had initiated investigations into Didi regarding its planned IPO. While Internet regulation is largely centralized through the CAC, specific decisions are carried out by a range of other government bodies, sometimes in coordination with each other and sometimes not.

Administrative clarification could help produce more certainty in the system, but without guidance from the very top, agencies and government offices within China’s bureaucracy often take their own initiative to demarcate regulatory boundaries. Domestic backlash in China against large tech companies has also fueled recent regulatory developments and provided the Chinese government with a popular mandate to strengthen national control over the platform economy.

4. Data protection, understood broadly, seems to be used as a lever to effectuate broader policy goals

Data protection seems to be understood broadly in China as encompassing cybersecurity, data governance, and data localization as much as, or even more than, privacy considerations and fair information practices with regard to personal information. 

As stated above, the Didi review is the first publicly announced investigation under the CRM since its formal enactment in June 2020. In many ways unprecedented, the Didi investigation is a consolidating step in China’s emerging data protection framework. This framework includes a range of regulatory tools designed to strengthen control over the digital economy and rein in the influence of large platforms in China. Chinese regulators view the solidification of these regulatory tools as the next stage in the evolution of data governance and of the digital economy more broadly.

It also looks like this consolidated regulatory regime is becoming a tool for Chinese regulators to assert their economic interests at home and abroad and to effectuate broader policy goals in the geopolitical arena. For example, Didi’s cybersecurity review may be more directly related to other regulatory goals, such as incentivizing Chinese tech companies to list in Hong Kong or Shanghai rather than in foreign markets. Indeed, in the past few weeks, many Chinese companies have followed suit and halted their IPO plans in the face of cybersecurity audits. 

This cybersecurity probe complements recent actions taken by the Chinese government to impose more oversight on overseas listings. For instance, the Data Security Law, which will go into effect in September, requires that Chinese entities not provide data stored within China to “foreign justice or law enforcement bodies” without permission from regulatory authorities. The CAC seems to be taking an active role in monitoring the overseas listings of Chinese tech firms through its cybersecurity competences and the CRM process. 

The broader policy goals pursued by the data security requirements, the cybersecurity review measures, and the new personal information protection regime are becoming increasingly visible. On July 10, the CAC issued a notice soliciting comments on a revision of the CRM, just one week after the public announcement of Didi’s review under the same legal regime. Most notably, the revisions include a bright-line requirement that operators holding the personal information of more than one million users apply for a cybersecurity review before listing abroad.

The inclusion of this bright-line threshold is significant because it indicates that Chinese regulators are strengthening the country’s personal information protection system even before the PIPL is finalized. Internet companies in China must now tie their processing of personal information to internal cybersecurity audits both before and after offering services abroad. 

With the newly revised CRM and the recent Didi investigation, cybersecurity audits are becoming a central feature of China’s data protection ecosystem, one that turns on the protection of both personal and non-personal information. The formal adoption of the DSL and PIPL this year will add to this, but in many respects Chinese regulators have already begun laying the groundwork for a new, far-reaching regulatory regime for data and the tech sector. 

Uniform Law Commission Finalizes Model State Privacy Law

This month, the Uniform Law Commission (ULC) voted to approve the Uniform Personal Data Protection Act (UPDPA), a model bill designed to provide a template for uniform state privacy legislation. After some final amendments, it will be ready to be introduced in state legislatures in January 2022. 

The ULC has been engaged in an effort to draft a model state privacy law since 2019, with the input of advisors and observers, including the Future of Privacy Forum. First established in 1892, the ULC has a mission of “providing states with non-partisan, well-conceived and well-drafted legislation that brings clarity and stability to critical areas of state statutory law.” Over time, many of its legislative efforts have been highly influential and have become law across the United States; for instance, the ULC drafted the Uniform Commercial Code in 1952. More recently, the ULC drafted the Uniform Fiduciary Access to Digital Assets Act (2014-15), which has been adopted in at least 46 states.

The UPDPA departs in form and substance from existing privacy and data protection laws in the U.S., and indeed internationally. The law would provide individuals with fewer, and more limited, rights to access and otherwise control data, with broad exemptions for pseudonymized data. Further narrowing the scope of application, UPDPA applies only to personal data “maintained” in a “system of records” used to retrieve records about individual data subjects for the purpose of individualized communication or decisional treatment. The Prefatory Note of a late-stage draft of the UPDPA states that it seeks to avoid “the compliance and regulatory costs associated with the California and Virginia regimes.” 

Central to the framework, however, is a useful distinction between “compatible,” “incompatible,” and “prohibited” data practices, which moves beyond a pure consent model by turning on the likelihood that a data practice may benefit or harm a data subject. We also find that the model law’s treatment of Voluntary Consensus Standards offers a unique, context- and sector-specific approach to implementation. Overall, we think the ULC model bill offers an interesting alternative for privacy regulation. However, because it departs significantly from existing frameworks, it could be slow to be adopted in states concerned with interoperability with the recent laws passed in California, Virginia, and Colorado. 

The summary below provides an overview of the key features of the model law, including its scope; the personal data it covers and the pivotal concept of “maintained” data; rights of access and correction; the treatment of pseudonymized data; the distinction between compatible, incompatible, and prohibited data practices; the responsibilities of collecting controllers, third-party controllers, and processors; voluntary consensus standards; and enforcement and rulemaking.

Scope 

UPDPA applies to controllers and processors that “conduct business” or “produce products or provide services purposefully directed to residents” in the state of enactment. Government entities are excluded from the scope of the Act. 

To be covered, a business must be a controller or processor that:

  1. during a calendar year, maintains personal data about more than [50,000] data subjects who are residents of the state, excluding data subjects whose data is collected or maintained solely to complete a payment transaction;
  2. earns more than [50] percent of its gross annual revenue during a calendar year from maintaining personal data from data subjects as a controller or processor;
  3. is a processor acting on behalf of a controller the processor knows or has reason to know satisfies paragraph (1) or (2); or
  4. maintains personal data, unless it processes the personal data solely using compatible data practices.

The effect of threshold (4) is that UPDPA applies to smaller firms that maintain personal data, but relieves them of compliance obligations as long as they use the personal data only for “compatible” purposes (a minimal sketch of this logic follows below). 
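
To make the interplay of these thresholds concrete, here is a minimal Python sketch of the coverage logic, assuming the bracketed [50,000] and [50] percent defaults are enacted as-is; every name in it is our own hypothetical illustration, not a term of the model act:

```python
# Hypothetical illustration of the UPDPA coverage thresholds summarized above.
# Assumes the bracketed defaults (50,000 data subjects; 50% of revenue).

from dataclasses import dataclass

@dataclass
class Business:
    resident_data_subjects: int               # excludes payment-only data subjects
    revenue_share_from_personal_data: float   # fraction of gross annual revenue
    processor_for_covered_controller: bool    # threshold (3)
    maintains_personal_data: bool             # threshold (4)
    only_compatible_practices: bool           # the threshold (4) carve-out

def covered(b: Business) -> bool:
    """True if any of the four thresholds is met."""
    return (
        b.resident_data_subjects > 50_000             # threshold (1)
        or b.revenue_share_from_personal_data > 0.50  # threshold (2)
        or b.processor_for_covered_controller         # threshold (3)
        or b.maintains_personal_data                  # threshold (4)
    )

def has_compliance_obligations(b: Business) -> bool:
    """A firm covered only via threshold (4) is relieved of obligations
    so long as it sticks to compatible data practices."""
    covered_by_1_to_3 = (
        b.resident_data_subjects > 50_000
        or b.revenue_share_from_personal_data > 0.50
        or b.processor_for_covered_controller
    )
    return covered_by_1_to_3 or (
        b.maintains_personal_data and not b.only_compatible_practices
    )

corner_shop = Business(1_200, 0.0, False, True, True)
print(covered(corner_shop), has_compliance_obligations(corner_shop))  # True False
```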

Maintained Personal Data 

UPDPA applies to “personal data,” which includes (1) records that identify or describe a data subject by a direct identifier, or (2) pseudonymized data. The term does not include “deidentified data.” UPDPA also does not apply to information processed or maintained in the course of employment or an application for employment, nor to “publicly available information,” defined as information lawfully made available from a government record or available to the general public in “widely distributed media.”

Narrowing the scope of application, UPDPA applies only to personal data “maintained” in a “system of records” used to retrieve records about individual data subjects for the purpose of individualized communication or decisional treatment. The committee has commented that the definition of “maintains” is pivotal to understanding the scope of UPDPA. To the extent that data businesses collect about individuals is not maintained in a system of records for the purpose of making individualized assessments, decisions, or communications, it is not within the scope of the Act (for instance, if it is maintained in the form of emails or personal photographs). According to the committee, the definition of “maintains” is modeled after the federal Privacy Act’s definitions of “maintains” and “system of records,” 5 U.S.C. § 552a(a)(3), (a)(5).

Rights of Access and Correction

Access and Correction Rights: UPDPA grants data subjects the rights to access and correct personal data, excluding personal data that is pseudonymized and not maintained with sensitive data (as described below). Controllers are only required to comply with authenticated data subject requests. “Data subject” is defined as an individual who is identified or described by personal data. According to the committee, the access and correction rights extend not only to personal information provided by a data subject, but also commingled personal data collected by the controller from other sources, such as public sources, and from other firms. 

Non-Discrimination: UPDPA prohibits controllers from denying a good or service, charging a different rate, or providing a different level of quality to a data subject in retaliation for exercising one of these rights. However, a controller may still treat a data subject as ineligible to participate in a program if the correction the data subject requests makes them ineligible under the program’s terms of service. 

No Deletion Right: Notably, UPDPA does not grant individuals the right to delete their personal data. The ULC committee has enumerated various reasons for taking this approach, including: (1) the wide range of legitimate interests for controllers to retain personal data, (2) difficulties associated with ensuring that data is deleted, given how it is currently stored and processed, and (3) compatibility with the First Amendment of the U.S. Constitution (free speech). The committee has also stated that UPDPA’s restrictions on processing for compatible uses or incompatible uses with consent should provide sufficient protection.

Pseudonymized Data 

“Pseudonymized data” is defined as “personal data without a direct identifier that can be reasonably linked to a data subject’s identity or is maintained to allow individualized communication with, or treatment of, the data subject. The term includes a record without a direct identifier if the record contains an internet protocol address, a browser, software, or hardware identification code, a persistent unique code, or other data related to a particular device.”

Pseudonymized data is subject to fewer restrictions than more identifiable forms of personal data. Generally, consumer rights contained in UPDPA (access and correction) do not apply to pseudonymized data. However, these rights do still apply to “sensitive” pseudonymized data to the extent that it is maintained in a way that renders the data retrievable for individualized communications and treatment. 

“Sensitive data” includes personal data that reveals: (A) racial or ethnic origin, religious belief, gender, sexual orientation, citizenship, or immigration status; (B) credentials sufficient to access an account remotely; (C) a credit or debit card number or financial account number; (D) a Social Security number, tax-identification number, driver’s license number, military identification number, or an identifying number on a government-issued identification; (E) geolocation in real time; (F) a criminal record; (G) income; (H) diagnosis or treatment for a disease or health condition; (I) genetic sequencing information; or (J) information about a data subject the controller knows or has reason to know is under 13 years of age. 

In practice, the ULC committee has stated that a collecting controller that stores user credentials and customer profiles can avoid the access and correction obligations if it segregates its data into a key code and a pseudonymized database, so that the data fields are stored under a unique code with no identifiers. The separate key allows the controller to reidentify a user’s data when necessary or relevant for its interactions with customers. Likewise, a collecting controller that creates a dataset for its own research use (without maintaining it in a way that allows reassociation with the data subject) will not have to provide access or correction rights even if the pseudonymized data includes sensitive information. Additionally, a retailer that collects and transmits credit card data to the card issuer in order to facilitate a one-time credit card transaction is not maintaining this sensitive pseudonymized data.
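
The key-code segregation the committee describes is essentially a data-architecture pattern. A minimal sketch, assuming a simple in-memory store; all names are hypothetical illustrations, not drawn from the model act:

```python
# Hypothetical sketch of key-code segregation: data fields live in a
# pseudonymized store under a random code, while a separate key table maps
# codes back to direct identifiers. One possible design, not a compliance recipe.

import secrets

key_table: dict[str, str] = {}           # code -> direct identifier, kept apart
pseudonymized_db: dict[str, dict] = {}   # code -> data fields, no identifiers

def store_record(identifier: str, fields: dict) -> str:
    """Split a customer record between the key table and the pseudonymized DB."""
    code = secrets.token_hex(16)          # unique code replaces the identifier
    key_table[code] = identifier
    pseudonymized_db[code] = fields
    return code

def reidentify(code: str) -> tuple[str, dict]:
    """Only the holder of the separate key table can relink a record,
    e.g. when servicing the customer relationship."""
    return key_table[code], pseudonymized_db[code]

code = store_record("alice@example.com", {"plan": "premium", "city": "Austin"})
print(reidentify(code))  # ('alice@example.com', {'plan': 'premium', 'city': 'Austin'})
```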

Compatible, Incompatible, and Prohibited Data Practices 

UPDPA distinguishes between “compatible,” “incompatible,” and “prohibited” data practices. Compatible data practices are per se permissible, so controllers and processors may engage in them without obtaining consent from the data subject. Incompatible data practices are permitted for non-sensitive data if the data subject is given notice and an opportunity to withdraw consent (an opt-out right); opt-in consent is required for a controller to engage in incompatible processing of “sensitive” personal data. Prohibited data practices may not be carried out under any circumstances (a minimal sketch of this consent logic follows the definitions below). 

UPDPA’s distinctions between “compatible,” “incompatible,” and “prohibited” data practices are based on the likelihood that the data practice may benefit or harm a data subject:

Compatible data practices: A controller or processor engages in a compatible data practice if the processing is “consistent with the ordinary expectations of data subjects or is likely to benefit data subjects substantially.” 

Incompatible data practices: Processing personal data is an incompatible data practice if it is neither a compatible nor a prohibited data practice; such processing is permissible only with the notice and consent described above. 

Prohibited data practices: Processing personal data is a prohibited data practice if the processing is likely to cause significant harm to the data subject; such processing may not be undertaken at all. 
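
Read together, these rules reduce to a small decision procedure. Here is a minimal sketch of the consent logic, assuming the classification of a given practice has already been made as a legal judgment; all names are our own illustration:

```python
# Hypothetical sketch of the consent logic described above. Classifying a
# practice as compatible / incompatible / prohibited is a legal judgment;
# this only encodes the consequences of that classification.

from enum import Enum

class Practice(Enum):
    COMPATIBLE = "compatible"
    INCOMPATIBLE = "incompatible"
    PROHIBITED = "prohibited"

def consent_required(practice: Practice, sensitive_data: bool) -> str:
    if practice is Practice.COMPATIBLE:
        return "none (per se permissible)"
    if practice is Practice.INCOMPATIBLE:
        return "opt-in consent" if sensitive_data else "notice and opt-out"
    return "not permitted under any consent model"

print(consent_required(Practice.INCOMPATIBLE, sensitive_data=True))   # opt-in consent
print(consent_required(Practice.INCOMPATIBLE, sensitive_data=False))  # notice and opt-out
```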

Responsibilities of Collecting Controllers, Third-Party Controllers, and Processors 

UPDPA creates different obligations for “controllers” and “processors,” and further distinguishes between “collecting controllers” and “third-party controllers.”

If a person with access to personal data engages in processing that is not at the direction and request of a controller, that person becomes a controller rather than a processor, and is therefore subject to the obligations and constraints of a controller.

Aside from complying with access and correction requests, controllers have a number of additional responsibilities, such as notice and transparency obligations, obtaining consent for incompatible data practices, abstaining from engaging in a prohibited data practice, and conducting and maintaining data privacy and security risk assessments. Processors are also required to conduct and maintain data privacy and security risk assessments. 

Voluntary Consensus Standards 

UPDPA enables stakeholders representing a diverse range of industry, consumer, and public interest groups to negotiate and develop voluntary consensus standards (VCSs) for complying with UPDPA in specific contexts. VCSs may be developed to address any provision of UPDPA, such as the identification of compatible practices for an industry, the procedure for securing consent to an incompatible data practice, or practices that provide reasonable security for data. Once a standard is established and recognized by a state Attorney General, any controller or processor may explicitly adopt and comply with it. It is also worth noting that compliance with a comparable privacy or data protection law, such as the GDPR or CCPA, is sufficient to constitute compliance with UPDPA.

Enforcement and Rulemaking 

UPDPA grants State Attorneys General rulemaking and enforcement authority. The “enforcement authority, remedies, and penalties provided by the [state consumer protection act] apply to a violation” of UPDPA. However, notwithstanding this provision, “a private cause of action is not authorized for a violation of this Act or under the consumer protection statute for violations of this Act.” 

What’s Next for the UPDPA?

The future of the UPDPA is, as yet, unclear. The drafting committee is currently developing a legislative strategy for submitting the Act to state legislatures on a state-by-state basis. It remains to be seen whether state legislators will have an appetite to introduce, consider, and possibly pass the UPDPA during the 2022 legislative session and beyond. As an alternative, legislators may wish to adapt certain elements of the model law, such as the voluntary consensus standards (VCS), flexibility for research, or the concept of “compatible,” “incompatible,” and “prohibited” data practices based on the likelihood of substantial benefit or harm to the data subject, rather than on notice and consent alone.