Future of Privacy Forum Promotes Verdi, Zanfir-Fortuna & Vance
FPF has promoted three of its leaders to more senior roles at the growing international non-profit. John Verdi has been elevated to Senior Vice President of Policy, Dr. Gabriela Zanfir-Fortuna has been appointed Vice President of Global Privacy, and Amelia Vance is now Vice President of Youth and Education Privacy.
For more than five years, John Verdi has been integral to FPF’s success as a mentor to our staff and an advisor to privacy leaders in the public and private sectors. Our international, youth and education programs are respected resources for civil society, policymakers, and companies because of the leadership of Gabriela Zanfir-Fortuna and Amelia Vance. These three appointments reflect the growth of FPF as data protection issues impact organizations around the world.
Jules Polonetsky, FPF CEO
As Senior Vice President of Policy, John Verdi supervises FPF’s policy portfolio, which advances FPF’s agenda on a broad range of issues. Verdi came to FPF in 2016 after serving as the Director of Privacy Initiatives for the National Telecommunications and Information Administration, where he crafted policy recommendations for the U.S. Department of Commerce and the Obama Administration on technology and innovation. Verdi previously oversaw the Electronic Privacy Information Center’s litigation program as General Counsel.
In Gabriela Zanfir-Fortuna’s new role as Vice President for Global Privacy, she will lead FPF’s work on global privacy developments, advising on EU data protection law and policy and working with FPF’s offices in Europe and Asia Pacific, as well as partners around the world. Zanfir-Fortuna gained years of experience in EU and international privacy law while working for the European Data Protection Supervisor in Brussels, as well as the Article 29 Working Party.
As Vice President of Youth and Education Privacy, Amelia Vance advises policymakers, academics, companies, and schools on child and student privacy laws and best practices; oversees the Student Privacy Compass website; and convenes stakeholders to ensure the responsible use of child and student data. She is a regular speaker at privacy and education conferences in the U.S. and abroad, has testified before Congress, spoken on child and education privacy issues for the Federal Trade Commission and U.S. Department of Education, and is part of the group of experts reviewing the OECD revised recommendations on the protection of children online.
Over her five years at FPF, Vance has grown the youth and education privacy project to 12 full-time staff. She came to FPF after serving as the Director of Education Data and Technology at the National Association of State Boards of Education. Prior to that role, she was a legal fellow at the Institute of Museum and Library Services and the Family Equality Council, an intern at the White House, the State Department, and the Office of Congressman Sander Levin, and a Field Organizer for the 2008 Obama campaign.
Five Things Lawyers Need to Know About AI
By Aaina Agarwal, Patrick Hall, Sara Jordan, Brenda Leong
Note: This article is part of a larger series focused on managing the risks of artificial intelligence (AI) and analytics, tailored toward legal and privacy personnel. The series is a joint collaboration between bnh.ai, a boutique law firm specializing in AI and analytics, and the Future of Privacy Forum, a non-profit focusing on data governance for emerging technologies.
Behind all the hype, AI is an early-stage, high-risk technology that creates complex grounds for discrimination while also posing privacy, security, and other liability concerns. Given recent EU proposals and FTC guidance, AI is fast becoming a major topic of concern for lawyers. Because AI has the potential to transform industries and entire markets, those at the cutting edge of legal practice are naturally bullish about the opportunity to help their clients capture its economic value. Yet to act effectively as counsel, lawyers must also be alert to the very real challenges of AI. Lawyers are trained to respond to risks that threaten the market position or operating capital of their clients. However, when it comes to AI, it can be difficult for lawyers to provide the best guidance without some basic technical knowledge. This article distills key insights from our collective experience to help lawyers feel more at ease responding to AI questions when they arise.
I. AI Is Probabilistic, Complex, and Dynamic
There are many different types of AI, but over the past few decades, machine learning (ML) has become the dominant paradigm.[1] ML algorithms identify patterns in recorded data and apply those patterns to new data to try to make accurate decisions. This means that ML-based decisions are probabilistic in nature. Even if an ML system could be perfectly designed and implemented, it is statistically certain that at some point it will produce a wrong result. All ML systems incorporate probabilistic statistics, and all of them can produce incorrect classifications, recommendations, or other outputs.
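A toy simulation makes this irreducible error concrete (an illustrative sketch, not drawn from any real system; the 30% default probability is an assumption chosen for the example). Even a classifier that knows the true outcome probability exactly is still wrong a predictable fraction of the time:

```python
import random

random.seed(42)

# Assume the true probability of default for a given applicant profile is 0.3.
# The best possible (Bayes-optimal) classifier predicts "no default" for this
# profile -- and is still wrong whenever the applicant actually defaults.
outcomes = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]
predictions = [0] * len(outcomes)  # the optimal prediction for this profile

error_rate = sum(p != o for p, o in zip(predictions, outcomes)) / len(outcomes)

# By construction, roughly 30% of these "optimal" decisions are wrong.
assert 0.25 < error_rate < 0.35
```

No amount of engineering removes this floor; it is a property of the problem, not a defect of the model.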
ML systems are also fantastically complex. Contemporary ML systems can learn billions of rules from data and apply those rules to a myriad of interacting data inputs to arrive at an output recommendation. Embed that billion-rule ML system into an already-complex enterprise software application and even the most skilled engineers can lose track of precisely how the system works. To make matters worse, ML systems decay over time, losing the fitness for purpose they derived from their initial training data. Most ML systems are trained on a snapshot of a dynamic world as represented by a static training dataset. When events in the real world drift, change, or crash (as in the case of COVID-19) away from the patterns reflected by that training dataset, ML systems are likely to be wrong more frequently and to cause issues that require legal and technical attention. Even in the moment of the “snapshot,” there are other qualifiers for the reliability, effectiveness, and appropriateness of training data. How the data is collected, processed, and labeled bears on whether it is sufficient to inform an AI system in a way fit for a given application or population.
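One common engineering control for this kind of drift is to compare the distribution of live inputs against the training snapshot using a summary statistic such as the Population Stability Index (PSI). The sketch below is a minimal, illustrative implementation; the 0.2 alert threshold is a widely used rule of thumb, not a regulatory requirement:

```python
import math

def psi(train_counts, live_counts):
    """Population Stability Index between two binned distributions.

    Rule of thumb from model risk management practice: PSI above ~0.2
    suggests meaningful drift worth technical (and possibly legal) review.
    """
    t_total, l_total = sum(train_counts), sum(live_counts)
    score = 0.0
    for t, l in zip(train_counts, live_counts):
        # Smooth empty bins to avoid division by zero and log(0).
        p_t = max(t / t_total, 1e-6)
        p_l = max(l / l_total, 1e-6)
        score += (p_l - p_t) * math.log(p_l / p_t)
    return score

# Live data shaped like the training data: no drift signal.
assert psi([100, 200, 300], [10, 20, 30]) < 0.01
# Reversed shape (e.g., a post-COVID behavior shift): strong drift signal.
assert psi([100, 200, 300], [300, 200, 100]) > 0.2
```

In practice a monitor like this would run on every model input on a schedule, with alerts routed to the teams responsible for model governance.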
While all this may sound intimidating, an existing regulatory framework addresses many of these basic performance risks. Large financial institutions have been deploying complex decision-making models for decades, and the Federal Reserve’s model risk management guidance (SR 11-7) lays out specific process and technical controls that are a useful starting point for handling the probabilistic, complex, and dynamic characteristics of AI systems. Most commercial AI projects would benefit from some aspects of model risk management, whether or not they are monitored by federal regulators. Lawyers at firms and in-house alike who find themselves needing to consider AI-based systems would do well to understand options and best practices for model risk management, starting with understanding and generalizing the guidance offered by SR 11-7.
II. Make Transparency an Actionable Priority
Immense complexity and unavoidable statistical error in ML systems make transparency a difficult task. Yet parties deploying—and thereby profiting from—AI can nonetheless be held liable for issues relating to a lack of transparency. Governance frameworks should include steps to promote transparency, whether preemptively or as required by industry- or jurisdiction-specific regulations. For example, the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA) mandate customer-level explanations known as “adverse action notices” for automated decisions in the consumer finance space. These laws set an example for the content and timing of notifications relating to AI decisions that could adversely affect customers, as well as for establishing the terms of an appeals process against those decisions. Explanations that include a logical consumer recourse process dramatically decrease the risks associated with AI-based products and help prepare organizations for future AI transparency requirements. New laws, like the California Privacy Rights Act (CPRA) and the proposed EU rules for high-risk AI systems, will likely require high levels of transparency, even for applications outside of financial services.
Some AI system decisions may be sufficiently interpretable to nontechnical stakeholders today, like the written adverse action notices mentioned above, in which reasons for certain decisions are spelled out in plain English to consumers. But oftentimes the more realistic goal for an AI system is to be explainable to its operators and direct overseers.[2]
When a system is not fully understood by its operators, it is much harder to identify and sufficiently mitigate its risks. One of the best strategies for promoting transparency, particularly in light of the challenges around “black-box” systems that are unfortunately common in the US today, is to rigorously pursue best practices for AI system documentation. This is good news for lawyers, who are adept in the skill and attention to detail required to institute and enforce such documentation practices. Standardized documentation of AI systems, with emphasis on development, measurement, and testing processes, is crucial to enable ongoing and effective governance. Attorneys can help by creating templates for such documentation and by assuring that documented technology and development processes are legally defensible.
III. Bias Is a Major Problem—But Not the Only Problem
Algorithmic bias can generally be thought of as outputs of an AI system that exhibit unjustified differential treatment between two groups. AI systems learn from data, including the biases embedded in that data, and can perpetuate those biases on a massive scale. The racism, sexism, ageism, and other biases that permeate our culture also permeate the data collected about us, and in turn the AI systems trained on that data.
On a conceptual level, it is important to note that although algorithmic bias often reflects unlawful discrimination, it does not constitute unlawful discrimination per se. Bias also includes the broader category of unfair or unexpected inequitable outcomes. While these may not amount to illegal discrimination of protected classes, they may still be problematic for organizations, leading to other types of liability or significant reputational damage. And unlawful algorithmic bias puts companies at risk of serious liability under cross-jurisdictional anti-discrimination laws.[3] This highlights the need for organizations to adopt methods that test for and mitigate bias on the basis of legal precedent.
Because today’s AI systems learn from data generated—in some way—by people and existing systems, there can be no unbiased AI system. If an organization is using AI systems to make decisions that could potentially be discriminatory under law, attorneys should be involved in the development process alongside data scientists. Those anti-discrimination laws, while imperfect, provide some of the clearest guidance available for AI bias problems. While data scientists might find the stipulations in those laws burdensome, the law offers some answers in a space where answers are very hard to find. Moreover, academic research and open-source software addressing algorithmic bias is often published without serious consideration of applicable laws. So, organizations should take care to ensure that their code and governance practices with respect to identifying and mitigating bias have a firm basis in applicable law.
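One simple, legally grounded test illustrates the point: the “four-fifths” rule of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures compares selection rates between groups. The sketch below shows the kind of check attorneys and data scientists might review together; it is a flag for further inquiry, not a complete disparate-impact analysis:

```python
def adverse_impact_ratio(selected_protected, total_protected,
                         selected_reference, total_reference):
    """Ratio of the protected group's selection rate to the reference group's.

    Under the EEOC "four-fifths" rule of thumb, a ratio below 0.8 is often
    treated as preliminary evidence of disparate impact -- a trigger for
    deeper review, not a legal conclusion on its own.
    """
    rate_protected = selected_protected / total_protected
    rate_reference = selected_reference / total_reference
    return rate_protected / rate_reference

# A hypothetical lending model approves 30% of protected-group applicants
# but 50% of reference-group applicants: the ratio is 0.6, below the 0.8 line.
assert adverse_impact_ratio(30, 100, 50, 100) < 0.8
```

A check like this is cheap to run on every model release, which is exactly why counsel should insist it be part of the documented testing process rather than an ad hoc exercise.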
Organizations are also at risk of over-indexing on bias while overlooking other important types of risk. Issues of data privacy, information security, product liability, and third-party risks, as well as the performance and transparency problems discussed in previous sections, are all critical risks that firms should, and eventually must, address in bringing robust AI systems to market. Is the system secure? Is the system using data without consent? Many organizations are operating AI systems without clear answers to these questions. Look for bias problems first, but don’t get outflanked by privacy and security concerns or an unscrupulous third party.
IV. There Is More to AI System Performance Than Accuracy
Over decades of academic research and countless hackathons and Kaggle competitions, demonstrating accuracy on public benchmark datasets became the gold standard by which a new AI algorithm’s quality is measured. ML performance contests such as the KDD Cup, Kaggle, and MLPerf have played an outsized role in setting the parameters for what constitutes “data science.”[4] These contests have undoubtedly contributed to the breakneck pace of innovation in the field. But they’ve also led to a doubling-down on accuracy as the yardstick by which all applied data science and AI projects are measured.
In the real world, however, using accuracy to measure all AI is like using a yardstick to measure the ocean. It is woefully inadequate to capture the broad risks associated with making impactful decisions quickly and at web scale. The industry’s current conception of accuracy tells us nothing about a system’s transparency, fairness, privacy, or security, and offers only a limited representation of what “accuracy” itself claims to measure. In a seemingly shocking admission, forty research scientists added their names to a paper demonstrating that accuracy on test data benchmarks often does not translate to accuracy on live data.
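A toy example with hypothetical numbers makes the limitation plain: on imbalanced data, a model that is useless for the task at hand can still post an impressive accuracy score.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

# A fraud-detection setting: 95 legitimate transactions, 5 fraudulent ones.
labels = [0] * 95 + [1] * 5

# A "model" that never flags fraud catches zero fraudulent transactions...
always_legitimate = [0] * 100

# ...yet reports 95% accuracy, saying nothing about the harm it misses.
assert accuracy(always_legitimate, labels) == 0.95
```

Metrics tied to the actual harm at stake, such as how many fraudulent transactions are caught and at what cost in false alarms, tell a very different story than the headline accuracy number.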
What does this mean for attorneys? Attorneys and data scientists need to work together to create more robust ways of benchmarking AI performance that focus on real-world performance and harm. While AI performance and legality will not always be the same, both professions can revise current thinking to imagine performance beyond high scores for accuracy on benchmark datasets.
V. The Hard Work Is Just Beginning
Unfortunately at this stage of industry and development, there are few professional standards for AI practitioners. Although AI has been the subject of academic research since at least the 1950s, and it has been used commercially for decades in financial services, telecommunications, and e-commerce, AI is still in its infancy throughout the broader economy. This too presents an opportunity for lawyers. Your organization probably needs AI documentation templates, policies that govern the development and use of AI, and ad hoc guidance to ensure different types of AI systems comply with existing and near-future regulations. If you’re not providing this counsel, technical practitioners are likely operating in the dark when it comes to their legal obligations.
Some researchers, practitioners, journalists, activists, and even attorneys have started the work of mitigating the risks and liabilities posed by today’s AI systems. Indeed, there are statistical tests to detect algorithmic discrimination and even hope for future technical wizardry to help mitigate against it. Businesses are beginning to define and implement AI principles and make serious attempts at diversity and inclusion for tech teams. And laws like ECOA, GDPR, CPRA, the proposed EU AI regulation, and others form the legal foundation for regulating AI. However, technical mitigation attempts still falter, many fledgling risk mitigations have proven ineffective, and the FTC and other regulatory agencies are still relying on general antitrust and unfair and deceptive practice (UDAP) standards to keep the worst AI offenders in line. As more organizations begin to entrust AI with high-stakes decisions, there is a reckoning on the horizon.
Author Information
Aaina Agarwal is Counsel at bnh.ai, where she works across the board on matters of business guidance and client representation. She began her career as a corporate lawyer for emerging companies at a boutique Silicon Valley law firm. She later trained in international law at NYU Law, to focus on global markets for data-driven technologies. She helped to build the AI policy team at the World Economic Forum and was a part of the founding team at the Algorithmic Justice League, which spearheads research on facial recognition technology.
Patrick Hall is the Principal Scientist and Co-Founder of bnh.ai, a DC-based law firm focused on the intersection of AI and data analytics. Patrick also serves as visiting faculty at the George Washington University School of Business. Prior to co-founding bnh.ai, Patrick led responsible AI efforts at the high-profile machine learning software firm H2O.ai, where his work resulted in one of the world’s first commercial solutions for explainable and fair machine learning.
Sara Jordan is Senior Researcher of AI and Ethics at the Future of Privacy Forum. Her profile includes privacy implications of data sharing, data and AI review boards, privacy analysis of AI/ML technologies, and analysis of the ethics challenges of AI/ML. Sara is an active member of the IEEE Global Initiative on Ethics for Autonomous and Intelligent Systems. Prior to working at FPF, Sara was faculty in the Center for Public Administration and Policy at Virginia Tech and in the Department of Politics and Public Administration at the University of Hong Kong. She is a graduate of Texas A&M University and University of South Florida.
Brenda Leong is Senior Counsel and Director of AI and Ethics at the Future of Privacy Forum. She oversees development of privacy analysis of AI and ML technologies, and manages the FPF portfolio on biometrics and digital identity, particularly facial recognition and facial analysis. She works on privacy and responsible data management by partnering with stakeholders and advocates to reach practical solutions for consumer and commercial data uses. Prior to working at FPF, Brenda served in the U.S. Air Force. She is a 2014 graduate of George Mason University School of Law.
Disclaimer: bnh.ai leverages a unique blend of legal and technical expertise to protect and advance clients’ data, analytics, and AI investments. Not all firm personnel, including named partners, are authorized to practice law.
[1] Commentators have often used the image of Russian nesting (Matryoshka) dolls to illustrate these relationships: AI includes machine learning, and machine learning, in turn, includes deep learning. Machine learning and deep learning have risen to the forefront of commercial adoption of AI in application areas such as fraud detection, e-commerce, and computer vision. See, e.g., The Definitive Glossary of Higher Mathematical Jargon, MATH VAULT (last accessed Mar. 4, 2021), https://mathvault.ca/math-glossary/#algo; Eda Kavlakoglu, AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the Difference?, IBM BLOG (May 27, 2020), https://www.ibm.com/cloud/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks.
[2] In recent work by the National Institute for Standards and Technology (NIST), interpretation is defined as a high-level, meaningful mental representation that contextualizes a stimulus and leverages human background knowledge. An interpretable AI system should provide users with a description of what a data point or model output means. An explanation is a low-level, detailed mental representation that seeks to describe some complex process. An AI system explanation is a description of how some system mechanism or output came to be. See David A. Broniatowski, Psychological Foundations of Explainability and Interpretability in Artificial Intelligence (2021), https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=931426.
[3] For example, the Equal Credit Opportunity Act (ECOA), the Fair Credit Reporting Act (FCRA), the Fair Housing Act (FHA), and regulatory guidance such as the Interagency Guidance on Model Risk Management (Federal Reserve Board, SR Letter 11-7). The EU Consumer Credit Directive, guidance on Annual Percentage Rates (APR), and the General Data Protection Regulation (GDPR) provide similar protections for European consumers.
[4] “Data science” tends to refer to the practice of using data to train ML algorithms, and the phrase has become common parlance for companies implementing AI. The term dates back to 1974 (or perhaps further), coined then by the prominent Danish computer scientist Peter Naur. Data science, despite the moniker, is yet to be fully established as a distinct academic discipline.
Event Report: From “Consent-Centric” Frameworks to Responsible Data Practices and Privacy Accountability in Asia Pacific
On September 16, the Asia-Pacific office of the Future of Privacy Forum (FPF) held its first event following its launch in August 2021. This event was hosted by the Personal Data Protection Commission (PDPC) of Singapore during the popular “Personal Data Protection Week” (PDP Week 2021).
The theme of the event was Exploring trends: From “consent-centric” frameworks to responsible data practices and privacy accountability in Asia Pacific, and it is part of a larger project carried out jointly by FPF and the Asian Business Law Institute (ABLI) across 14 Asian jurisdictions. The event was co-organized by ABLI and FPF in the context of a cooperation agreement signed by the two organisations in August 2021.
This post summarizes the discussions in the two stellar panels featuring regulators, thought leaders, and practitioners from across the region, and highlights key takeaways:
Consent requirements for the collection and processing of personal data, the “notice & choice” principle, and the exceptions and alternatives to those requirements together form the area where regulatory coherence is most needed in Asia-Pacific (APAC). Over-reliance on consent has led to the development of a “tick-the-box” approach to data protection, consent fatigue, and unnecessary compliance costs due to contradictory requirements in Asia Pacific.
Modern data protection laws should shift the onus of data protection from users to organizations, by promoting an accountability-based approach to data protection over a “consent-centric” one. Different avenues may be used to rebalance consent and privacy accountability in APAC, including through concepts such as legitimate interests, compatible uses and equivalent notions.
Making consent meaningful again in APAC can happen in a variety of ways, which include winding back the range of circumstances in which consent is sought; requiring consent only where it can be given thoughtfully, sparingly and with understanding; supporting enhanced transparency and consent through UX and UI design, with due attention brought to the different needs and literacy levels of users.
Harmonization is illusory in the face of Asia’s extreme diversity, but a bottom-up approach to convergence can work in the context of regional cooperation.
1. Repositioning consent requirements in APAC’s fragmented data protection landscape
Dr. Clarisse Girot, Director of FPF Asia Pacific and ABLI Senior Fellow, opened the discussion by explaining that a comparative look at “consent” requirements across the region was chosen as a key topic following suggestions from a vast network of stakeholders. Feedback showed that consent requirements which apply to the collection and processing of personal information, the “Notice & Choice” principle, and the exceptions and alternatives to those requirements together form an area where regulatory coherence is most needed in the Asia Pacific (APAC) region.
In practice, the cumulative application of consent requirements for data processing in the region has led to the development of a “tick-the-box approach” to data protection in many jurisdictions. However, in APAC as elsewhere, overreliance on consent as a lawful ground by organisations has led to a general consent fatigue and unnecessary compliance costs, due to contradictory requirements.
A consensus is therefore forming across Asian jurisdictions that modern data protection laws should shift the onus of data protection from users to organizations, by promoting an accountability-based approach to data protection over a “consent-centric” one. This triggers a need to put the role of consent in perspective and to bring it back to the place initially assigned to it by the earliest data protection frameworks — namely, as one among many elements in a regulatory ecosystem that seeks to balance the role and interests of individuals, the responsibility of organizations, and broader social and societal interests in the processing of personal data.
The main goal of the workshop was therefore to identify similar discussions that are taking place in multiple jurisdictions in APAC and to explore the possibilities of convergence among them. The discussion will also feed a joint comparative study with recommendations for convergence on consent and related data protection requirements, which will be published jointly by FPF and ABLI before the end of the year.
Both panels were composed of data protection professionals from different APAC jurisdictions and disciplines. Each speaker contributed with an original and expert point of view that could help identify commonalities, pathways for interoperability between Asian data protection frameworks, and concrete solutions to provide meaningful data protection to individuals — with or without consent.
Such reflections and recommendations are particularly timely at a time when key jurisdictions in Asia, including India, Indonesia, Thailand, Vietnam, Hong Kong SAR, Malaysia, and Australia, are adopting new data protection frameworks or amending their laws, and when new laws or major amendments have recently come into force in jurisdictions such as Thailand, Korea, New Zealand, China, and Singapore.
2. Rebalancing consent and privacy accountability
The title of the first panel was “Switching from a consent-centric approach to privacy accountability: a comparative view of APAC data protection laws”.
The panel was moderated by Yeong Zee Kin, Assistant Chief Executive, Infocomm Media Development Authority (IMDA), and Deputy Commissioner, PDPC, Singapore, with input from Peter Leonard, Principal and Director at Data Synergies, Sydney, Takeshige Sugimoto, Managing Director at S&K Brussels, Tokyo, Shinto Nugroho, Chief Public Policy and Government Relations at Gojek, Jakarta, and Marcus Bartley-Johns, Asia Regional Director, Government Affairs and Public Policy at Microsoft, Singapore.
The goal of this first panel was to identify commonalities and pathways for interoperability between Asian data protection frameworks with regard to balancing the protection of individuals, accountability, and broader social and societal interests. This includes the role of consent, lawful grounds to process personal data, and/or other privacy principles in jurisdictions which do not contain provisions on “lawfulness” of processing.
The most important points highlighted during the discussion were the following:
2.1 How to achieve convergence across APAC’s fragmented and diverse landscape?
As an introductory note, Yeong Zee Kin stressed that APAC jurisdictions take different approaches towards privacy and data protection, and that their laws are at different stages of development (e.g., Japan and South Korea have had privacy laws for a long time, while Singapore, the Philippines, and Malaysia are more recent players). One may add that data protection and privacy laws in the region follow different structures and are not all modelled on the EU GDPR, so some key provisions (e.g., on the “lawfulness” of data processing) have no equivalent in other jurisdictions.
A challenge which is endemic in APAC is therefore to identify a common ground in order to achieve convergence, while respecting the different inspirations and the particular culture that are enshrined in each jurisdiction’s privacy laws.
This raised a key question for participants: should APAC stakeholders aim for full harmonisation, or for more targeted convergence, for instance through the mutual recognition of specific legal standards?
2.2 Over-reliance on consent and need for alternatives
Speakers highlighted that APAC-based organisations tend to overly rely on consent, even in cases where another solution or legal basis would be available and more appropriate. The potential consequence of such a practice is the erosion of the value of consent.
A view expressed by Peter Leonard and shared across the panel was that consent, “informational self-determination”, or “citizen self-management” of privacy settings remains important. However, individuals should only be expected to self-manage what is realistically manageable. There is a need to reduce both the frequency of consent requests and the level of noise in privacy policies and collection notices, and to rethink the role of those documents.
Among “noise reduction measures”, he specifically cited appropriately targeted exceptions, whether through legitimate interests, industry codes or standards, class exemptions by regulators, or new generic concepts such as “compatible data practices”, in such a way that the control of individuals over their personal data is not adversely affected. As a baseline, moving away from consent requires recognizing the importance of concepts like “reasonableness” or “fairness” to support the alternative requirements of data protection laws.
Unambiguous express consent should remain necessary for categories of processing that create a higher risk of privacy harm to individuals: in particular, for manifestly sensitive data, including data about children, and for processing that directly contradicts individuals’ rights and interests or cannot reasonably be expected by them. This may also tie in with the concept of “no-go zones” as it has been developing in Canada, a concept which has also gained some popularity in Australia.
2.3 Varying approaches and interpretations in different jurisdictions: Japan, Indonesia, Vietnam
Another point raised by the moderator and panellists was that material differences in the protections awarded by legal systems in APAC countries may hinder the path towards harmonisation. There is therefore a need to better understand how each law works before proposing solutions for convergence, so that they can be meaningful for all.
Takeshige Sugimoto commented on the “consent by default” situation which currently prevails in Japan. He noted that the Japanese data protection law (APPI) does not have a “legitimate interest” legal basis, but that—contrary to a common belief—it does not take a consent-centric approach either. Rather it permits processing of personal data based on the “business necessity” ground, as long as the data subject may reasonably expect the intended further usage of his or her data. The boundaries of permissible processing under APPI are therefore similar to GDPR, even without “legitimate interests” as a legal basis. In its adequacy decision on Japan, the European Commission actually states that the Japanese system also ensures that personal data is processed lawfully and fairly through the purpose limitation principle.
Sugimoto also mentioned the Japanese Personal Information Protection Commission (PPC Japan) guidelines, which list limited cases where consent must be sought, while pointing to other areas which are open to other legal bases and authorisation from the PPC. In other words, in his view there would be no significant difference, in practice, between what GDPR considers legitimate interests-based processing, and what APPI considers lawful processing.
Shinto Nugroho presented the situation in ASEAN from the perspective of Gojek, Indonesia’s first decacorn and SuperApp, with operations in Indonesia, Vietnam, Singapore, Thailand, and the Philippines. Nugroho’s particular focus was on the challenges of operationalizing consent in times of crisis, such as the current Covid-19 pandemic. She noted that in its current state Indonesia’s data protection legislation is quite consent-centric, but that the draft Data Protection Bill soon to be adopted by the Parliament of Indonesia treats consent as only one of seven available lawful grounds for processing personal data (others include contract, performance of a legal obligation, and legitimate interests).
Nugroho welcomed this development. She explained how consent as a legal ground is neither always practical for controllers nor protective for individuals, and is in fact sometimes even harmful for citizens. For instance, in Indonesia, out of 170 million inhabitants roughly 160 million are eligible for vaccination against Covid. Gojek has secured a large number of vaccination slots from the government, notably for its drivers, who are frontliners. However, the government requires that everyone first be registered in the public vaccination system, for which consent is required. But not everyone has access to the Internet or the literacy required to register; moreover, the vaccination register itself is a work in progress. Securing 100% opt-in consent from millions of drivers would not only slow down the process; many drivers would also miss the notification or fail to complete their registration. In such cases, the most adequate legal basis for Gojek to get the drivers registered would be its “legitimate interests” as an employer, combined with clearly stated purposes and adequate transparency, rather than mere consent. The consideration that drivers are exposed to a high risk of contamination at a time when the epidemic is hitting the country should override the need to obtain consent.
Lastly, Nugroho mentioned the ongoing discussions on the future Data Protection Decree of Vietnam, expected to be adopted imminently. The Decree does not provide for a legitimate interests basis, but it similarly allows controllers to collect and process data on grounds other than consent (such as security, where permitted under the law, and research). Discussions on convergence must therefore factor in the fact that APAC data protection laws can vary even between neighbouring countries that have drawn inspiration from similar sources (primarily the EU GDPR) when drafting their future comprehensive data protection frameworks.
2.4 Transparency & choice as trust enablers
Marcus Bartley-Johns welcomed the fact that the discussions introduced nuance into the conversation, because “making consent meaningful again” is a journey, and for that we must avoid binary approaches (“for, or against, consent”). He also concurred with Takeshige Sugimoto that laws and regulations can go in one direction while business practices and embedded behaviors go in another, and these variations are a key part of the discussion around consent.
Bartley-Johns shared a few data points on what consent means in the region. In 2019, Microsoft surveyed 6,300 consumers across Asia on consumer perceptions of trust; 53% of those surveyed said that they had had a negative trust experience related to privacy when using a digital service in the region. Younger people reported a higher share of negative experiences, and more than half of those said they would switch services if their trust was breached. Bartley-Johns added that it should be acknowledged that consumers have reasons to be wary, one of those reasons being how excessively difficult it is for individuals to find out and understand how their data is being collected and used.
Another data point relates to the privacy dashboard which enables Microsoft users globally to see and control their activity data, including location, search, and browsing data, across multiple services. 51 million unique visitors have used that dashboard since its launch in May 2018 (19 million people in 2020). Japan, China, Australia, India, and Korea feature in the top 20 markets for dashboard use. In other words, the speaker stated, Microsoft’s experience shows that consumers wish to know what personal data is collected about them and to exercise their options and rights when they are given an opportunity to have their say.
Following up on this point, Peter Leonard added that transparency plays a double role: on the one hand, it allows individuals to know how their data are being used, while at the same time providing safeguards against deceptive and manipulative statements by organisations, where appropriate “do only what you say” laws are in place at a local level.
2.5 “Legitimate interests” in context
On the whole, all the speakers expressed support for the development of the concept of legitimate interests, or equivalent concepts, in APAC laws. The adoption across more privacy laws of alternative grounds for processing personal data, notably legitimate interests, is one of the potential areas for strengthening privacy regulatory coherence in the region. Microsoft, for instance, has advocated for this in a recent policy paper calling for strengthening privacy regulatory coherence in Asia.
Speakers noted that one problem in APAC with increased reliance upon legitimate interests as an alternative to consent is that lists of legitimate interests are varied and jurisdiction-specific. This means that entities operating across borders and seeking a common denominator in their privacy policies and requests for consent will continue to be incentivised to over-rely upon consent, unless they are given some certainty about how lawmakers and regulators are likely to apply this notion. Convergence can be strengthened by the adoption of regulatory guidelines on implementing this approach and by information sharing on their implementation.
Peter Leonard contributed by stating that, to make the legitimate interests lawful ground work in APAC, there could be a need for a regional mutual recognition scheme covering the differing definitions of, and approaches to, legitimate interests. In his view, this will not lead to absolute convergence, but it will allow a compromise that takes stock of the local legal systems and cultures of a diverse Asia. Failing this, data controllers will keep using consent as a common denominator by default.
In the view of Takeshige Sugimoto, a compilation of use cases clarifying whether legitimate interests or consent would be the most appropriate legal basis in each case would help achieve a more holistic regional approach. This could lead to international consensus on specific use cases, which would be more efficient than awaiting joint regulatory guidance that might take years to be issued.
Marcus Bartley-Johns suggested that it would be valuable to check whether the consensus that emerged from this panel could also emerge in the regional and global regulatory community. This is important as more regulations and guidance have emerged in Asia in recent months that tend to make transparency or consent requirements even more prescriptive. In this respect, there would be real value in obtaining practical guidance from regulators on these issues, as the PDPC has done, with indicative examples, use cases, and scenarios that provide a basis for a more holistic approach to balancing consent and other approaches in the region.
Seconding the comments by Sugimoto and Bartley-Johns, Yeong Zee Kin indicated that one of the sources of inspiration for drafting the PDPC’s guidelines on legitimate interests under the recently amended PDPA was FPF’s report on legitimate interests in the EU, which compiles guidance and decisions by regulators, as well as court cases, in which the scope of the legitimate interests lawful ground was clarified. He suggested that the right way forward would probably be to identify real-world examples and use cases where a regional or global consensus can be reached on situations in which consent is not needed; the next step would be for regulators to start contextualizing the end result according to their respective legal systems (necessity, reasonableness, legitimate interests, contractual necessity, vital interests, etc.).
The moderator suggested that FPF and other stakeholders contribute to building this library of “legitimate interests”, and that regulators could do their part by going out to their local industry and looking for such use cases. Echoing a remark by Peter Leonard, however, he acknowledged that across the broad spectrum of different cultures and histories in Asia, complete harmonization is not realistic. In contrast, a practical, bottom-up approach to convergence might get us somewhere, and we should seek to build on consensus as and when we find it, for instance bilaterally between like-minded partners and, perhaps more slowly, at a regional level.
3. Making consent meaningful (again)
The title of the second panel was “Shaping choices in the digital world: how consent can become meaningful again”. The panel was moderated by Rajesh Sreenivasan, Head, Technology Media and Telecoms Law Practice, Rajah & Tann Singapore LLP. It further included interventions by Anna Johnston, Principal, Salinger Privacy (Sydney); Malavika Raghavan, Visiting Faculty, Daksha Fellowship and FPF Senior Fellow for India; Rob van Eijk, FPF Europe Managing Director; and Edward Booty, Founder and CEO of reach52.
Rajesh Sreenivasan started by saying that the problem with consent lies not in the concept itself but in the way this legal ground has been used for processing personal data. Especially in APAC, where multiple jurisdictions take very different approaches, he said that obtaining meaningful consent requires first answering two questions: 1) Meaningful consent for whom: for the data subject or for the organisation? and 2) Meaningful how? Additionally, the moderator openly asked participants whether, in their view, it was more pressing to make consent meaningful or to build alternative models for fair data processing, given that consent might have become redundant in today’s context, at the speed at which data is being used.
3.1 Are current online consent-seeking practices fair?
Anna Johnston kicked off by supporting a shift of the burden away from individuals and onto organisations when it comes to consent standards. According to her, consent has almost lost its true meaning because it has been so over-used as a promise; in her own words, it has become like “your cheque is in the mail”!
The situation in Australia, as she sees it, is that consent is over-relied on but under-enforced. Guidance from the Australian Privacy Commissioner (the OAIC), backed by case law, indicates that consent in Australian law is similar to the GDPR standard: it cannot be bundled up with other things, it cannot be included in mandatory Terms and Conditions or in a Privacy Policy, and it cannot even be obtained on an “opt-out” basis. Consent as a lawful basis on which to collect, use, or disclose personal information has to be the customer’s clear “opt-in” choice, made freely and separately from all other choices. However, the law is under-enforced, and so it is still very common to see business practices that follow a model of “bury the customer in fine print and make them agree to something we know they won’t even read”, and then claim that the customer has consented.
Surveys conducted by the OAIC suggest that only 20% of Australians feel confident that they understand privacy policies when they actually read them. Recently, the Australian consumer and competition regulator, the ACCC, called out this kind of power imbalance and these kinds of behaviours from the Big Tech platforms, and recommended that the Privacy Act be amended to make the standard required for consent much clearer in the law.
3.2 The boundaries of consent’s role
Overall, speakers agreed that there is a need to “make consent meaningful again”, primarily by winding back the range of circumstances in which consent is sought by organisations. Consent should be sought sparingly, and required only where it can be given thoughtfully and with understanding. Consent is only real consent where an individual has a real choice [note: an increasing number of data protection laws in Asia recognize the concept of “free and unbundled consent”]. A discussion is needed about when requiring consent is sensible, and about how to ensure that individuals’ ability to control their privacy settings is not compromised by any changes in consent requirements.
Winding back such requirements in order to improve data privacy may sound both radical and counter-intuitive. However, over both sessions a consensus formed that processing without consent should only be recommended if the processing is aligned with the ordinary expectations or direct interests of data subjects, and without ever overriding the requirement for transparency.
Anna Johnston thus opined that there should be a clear distinction between business activities that require consent and those that do not. For example, activities that are outside customers’ expectations should require consent (e.g. asking someone to join a research project), whereas the same would not apply to unobjectionable, fair, and proportionate activities (such as including an individual in a customer database), nor to others with public interest backing. She concluded by adding that there should also be a list of activities that are prohibited even if consent is given, including profiling children for marketing purposes.
In his presentation, FPF’s Europe Managing Director, Dr. Rob van Eijk, concurred and added that much of the debate on the consequences of the datafication of society has been about limiting both the collection of data and its further use. Consent is one of the ways to regulate these two “gateways”, and there are multiple ways to ensure that everyone is on board. In practice, however, much of the burden falls on users to read and comprehend what is being put before them. This aspect was the key focus of this year’s Dublin Privacy Symposium organized by FPF, entitled Designing for Trust: Enhancing Transparency & Preventing User Manipulation.
An important point made during the symposium is that organisations should be proactive in increasing transparency from a design perspective, so as to present users with a real choice and encourage them to make deliberate decisions. Understandability (how people actually read through the information) can, for instance, be tested through technology in the online space. Another important point is that organizations should ask themselves whether they should be collecting all the envisaged data in the first place (in line with the minimisation principle).
They must also take active steps to prevent user manipulation, not only in designing consent solutions (for instance, cookie banners) but also when they process data through machine learning algorithms. Finally, the question of vulnerable groups should be factored into the design of UX/UI (“have we left any groups behind?”). A lot can be done to make things more understandable, which in turn raises the question of the extent to which the expression of choice can be embedded in the technology.
3.3 Dealing with users with different needs and literacy levels in APAC
The Asia Pacific region is one of contrasts, especially in terms of literacy levels, including financial and health literacy, due among other reasons to differing educational levels and the wide linguistic variety that exists in some countries.
FPF’s Malavika Raghavan shared comments and findings from her research and extensive field work in India, exploring how the mental models of internet users in India affect these discussions on consent, with a particular focus on the financial sector (e.g. loan applications). She underlined the importance of understanding the context of non-Western users, particularly new generations of users in Asia, before even attempting to design laws and practices for obtaining meaningful online consent.
For instance, Raghavan pointed to surveys showing that many mainstream Indian users – i.e. modest-earning individuals from primarily rural areas – do not distinguish between their mobile phones, the internet, online services, and allied services like payment platforms, because they access all of them exclusively through their phones. Understanding this reality (users who have never used a computer, only mobile phones with preloaded apps, free allowances, etc.) is key to start thinking about designing consent, or even policymaking around consent.
However, literacy is not necessarily a barrier, and it is not correlated with digital skills: highly proficient digital users might not be literate, and vice versa. Moreover, many Indian families share their mobile devices, which means that consent in those scenarios should be considered as given by a group of individuals rather than by separate individuals: this mental model is very far from that of a designer or policymaker. Asking for one-to-one consent in such circumstances might not make sense. But however disadvantaged, individuals still have strong ideas about how their data may be shared.
The limitations of consent have been analysed by Raghavan in particular in her work on the Data Empowerment and Protection Architecture (DEPA) and the Consent Layer developed by IndiaStack, which seeks to enable secure and effective sharing of personal data with third-party institutions in India through the concept of “consent managers”. Raghavan highlighted in her work how cognitive limitations bear on individuals’ decision-making about their personal data, and how the threat of denial of service can make “taking consent” a false choice. To be effective, therefore, such systems must be supported by strong accountability systems and access controls that operate independently of consent. Relying solely on consent is not a good idea: a wealth of data protection and consumer protection thinking has shown that consent is necessary but not sufficient for data protection.
Moreover, the panellist concluded, coders and digital platform designers should consider users’ perceptions, literacy, and context when setting up online services. The law alone cannot fix what has been broken by technology. This, according to Raghavan, is particularly important in a jurisdiction where the highest judicial instances have recognized privacy as a fundamental right (such as India) and where users have strong ideas and reasonable expectations about how digital data flows occur. In that exercise, unbundling ancillary consent-requiring data processing from online services’ terms and conditions should be front and center.
Edward Booty then shared his experience as CEO of reach52, a social enterprise and growth-focused start-up that provides accessible and affordable healthcare for the 52% of the world without access to health services, with five key markets in Cambodia, the Philippines, India, Indonesia, and Kenya.
reach52 uses technology and community outreach to widen access to health services while simultaneously lowering their costs. Booty explained that his company is still small but has accumulated a lot of sensitive data in the multiple countries in which it operates. He shared his experience of collecting health data and profiling residents to provide better care in remote rural communities in the Philippines and Cambodia, and of uncovering data-driven insights to inform more targeted, effective healthcare access solutions. Although it is sometimes disheartening that some users do not care, not having legitimate consent from users in a data-driven business model constituted a risk to his start-up. Furthermore, reach52 still believes that it must help the people who use its services understand their rights around data collection and use, regardless of their education and literacy levels. Booty explained how consent was sought from individuals who provided their data for such purposes, using video, visuals, and progressive disclosure, paying attention to the way terms are explained and consent obtained, so as not to fall short for people with low literacy and education levels. For this, support was obtained from the Facebook accelerator and IMDA.
A specific challenge Booty described is that local and national government authorities then came to reach52 to obtain access to the datasets for a variety of purposes, notably to manage different humanitarian crises. The speaker shared that, as pressure from those authorities mounted, the organisation started working on ways to obtain more meaningful and granular consent from individuals for each of the needs their data could serve. This involved engaging designers to deliver simple flyers informing individuals about what could happen to their data after its collection, as well as about their data-related rights. The process included testing with different age groups to make the message intelligible to a wide audience.
3.4 How UX and UI can support enhanced transparency and consent
During the session, the idea was raised several times that designers, and improvements to the user experience and user interface (UX/UI), have an essential role to play in improving the regulation of architectures of choice.
In recent years, more academics and data protection regulators have underlined the fundamental role that UX/UI design can play in user empowerment, and that design and interfaces must now form part of the compliance analysis. Universally accepted icons could be one way to improve intelligibility, said Anna Johnston. In her presentation, she argued that web designers should try to think with the mind of a user, drawing on useful evidence and guidance on how to better design privacy notices, such as the UK Government’s piece on better privacy notice design.
Various ideas for improving privacy notices are modelled on successful designs used in safety messaging (like traffic light indicators) and product labelling (such as star ratings and nutrition labels). But this form of notice still does not work at scale. Anna Johnston expressed the view that the most innovative idea she has seen in this space comes from Data61, an arm of the Australian Government, which has proposed machine-readable icons modelled on the Creative Commons icons from copyright law: universally agreed, legally binding, clear, and machine-readable.
This latter suggestion was echoed by the findings of FPF’s Dublin Privacy Symposium on manipulative design practices, outlined by Dr. Rob van Eijk during the session. According to him, the Symposium’s speakers explained that providers should encourage users to make deliberate decisions online by avoiding so-called “dark patterns”, consider the needs of vulnerable groups (such as visually impaired or colour-blind users), and consider the best way of informing users where data collection devices have no visual or audio interfaces (e.g. IoT). Van Eijk added that cookie walls as they are developing in Europe may be a radical solution, as they prevent users from accessing content unless they agree to pay a fee or accept online tracking.
Conclusion
Commissioner Raymund Liboro, National Privacy Commissioner of the Philippines, delivered the concluding remarks of the workshop.
To support the work of FPF and ABLI and the discussions of the day, Commissioner Liboro evoked a topical case in the Philippines. In late August, his office ordered the take-down of money-lending apps from the Google Play Store to sanction the practices of some online lending platforms. These platforms harvested excessive information from their users without legitimate purpose through unreasonable and unnecessary app permissions, including saving and storing their clients’ contact lists and photo galleries, ostensibly to evaluate their creditworthiness. Yet an applicant’s creditworthiness may be determined through other lawful and reasonable means. Moreover, these apps have also been the subject of more than 2,000 complaints of unauthorized use of personal data that resulted in the harassment and shaming of borrowers in front of persons in their mobile devices’ contact lists in order to collect debts.
Such behaviors and practices cannot be considered acceptable simply because users have supposedly given their “legitimate consent” to them, which was the companies’ first line of defence. This, Commissioner Liboro said, combined with the privacy paradox, urges the data protection community to reconsider the current regulatory paradigm operating in Asia and globally. As policymakers now regulate at hyperscale – with encompassing laws coming up in China, India, Indonesia, Thailand, and many other ASEAN countries following suit, impacting millions of data subjects – the current dependence on consent and paper compliance should be replaced with accountability and an added onus on organisations to ensure and demonstrate compliance. Privacy accountability is a compelling force, and accountable organisations foster trust and thrive, said the Commissioner.
The workshop set the scene and informed the discussion around consent and accountability in APAC jurisdictions. All participants agreed on the need to reconsider the use of the consent legal ground in the region. The datafication of society, as well as the global dimensions of privacy and data protection, will likely push policymakers to aim for convergence while respecting the legal culture and approach of each jurisdiction.
Commissioner Liboro concluded the event by expressing his appreciation to everyone who participated in the discussions, and reminded the participants that this conversation aims at setting the foundations of a collective response that will benefit the privacy ecosystem in the Asia-Pacific region.
The next steps of the FPF ABLI project will be announced soon.
Brain-Computer Interfaces: Privacy and Ethical Considerations for the Connected Mind
View the report by FPF and IBM focusing on BCI privacy and ethics here.
Introduction
Brain-computer interfaces (BCIs) are a prime example of an emerging technology that is spawning new avenues of human-machine interaction. Communication interfaces have developed from the keyboard and mouse to touchscreens, voice commands, and gesture interactions. As computers become more integrated into the human experience, new ways of commanding computer systems and experiencing digital realities have trended in popularity, with novel uses ranging from gaming to education.
Defining BCIs and Neurodata
BCIs are computer-based systems that directly record, process, analyze, or modulate human brain activity in the form of neurodata that is then translated into an output command from human to machine. Neurodata is data generated by the nervous system, composed of the electrical activities between neurons or proxies of this activity. When neurodata is linked, or reasonably linkable, to an individual, it is personal neurodata.
BCI devices can be either invasive or non-invasive. Invasive BCIs are installed directly into—or on top of—the wearer’s brain through a surgical procedure. Today, invasive BCIs are mainly used in the health context. Non-invasive BCIs rely on external electrodes and other sensors or equipment connected to the external surface of the head or body, for collecting and modulating neural signals. Consumer-facing BCIs primarily use various non-invasive methods, including headbands.
Key Applications and Top-of-Mind Privacy and Ethical Challenges
Some BCI implementations raise few, if any, privacy issues. For example, individuals using BCIs to control computer cursors might not reveal any more personal information than typical mouse users, provided BCI systems promptly discard cursor data. However, some uses of BCI technologies raise important questions about how laws, policies, and technical controls can safeguard inferences about individuals’ brain functions, intents, or emotional states. These questions are increasingly salient in light of the expanded use of BCIs in:
Gaming – where BCIs augment existing gaming platforms and offer players new ways to play using devices that record and interpret their neural signals.
Employment – where BCIs monitor workers’ engagement to improve safety during high-risk tasks, alert workers or supervisors of dangerous situations, modulate workers’ brain activity to improve performance, and provide tools to more efficiently complete tasks.
Education – where BCIs can track student attention, identify students’ unique needs, and alert teachers and parents of student learning progress.
Neuromarketing – where marketers incorporate the use of BCIs to intuit consumers’ moods, and to gauge product and service interest.
Military – where governments are researching the potential of BCIs to help rehabilitate soldiers’ injuries and enhance communication.
It is important for stakeholders in this space to delineate between the current and near future uses and the far-distant notions depicted by science fiction creators. The realistic view of capabilities is necessary to credibly identify urgent concerns and prioritize meaningful policy initiatives. While the potential uses of BCIs are numerous, BCIs cannot at present or in the near future “read a person’s complete thoughts,” serve as an accurate lie detector, or pump information directly into the brain.
As BCIs evolve and are more commercially available across numerous sectors, it is paramount to understand the real risks such technologies pose. BCIs raise many of the same risks posed by home assistants, medical devices, and wearables, but implicate new and heightened risks associated with privacy of thought, resulting from recording, using, and sharing a variety of neural signals. Risks include, but are not limited to:
Collecting, and potentially sharing, sensitive information related to individuals’ private emotions, psychology, or intent;
Combining neurodata with other personal information to build increasingly granular and sensitive profiles about users for invasive or exploitative uses, including behavioural advertising;
Making decisions that significantly impact patients, employees, or students based on information drawn from neurodata (with distinct risks whether the conclusions drawn are accurate or inaccurate);
Security breaches compromising patient health and individual safety and privacy;
A lack of meaningful transparency and personal control over individuals’ neurodata; and
Surveilling individuals based on the collection of sensitive neurodata, especially from historically and heavily surveilled communities.
These technologies also raise important ethical questions around fairness, justice, human rights, autonomy, and personal dignity.
A Mix of Technical and Policy Solutions Is Best for Maximizing Benefits While Mitigating Risks
To promote privacy-protective and ethical uses of BCIs, stakeholders should adopt technical measures including but not limited to:
Providing hard on/off controls whenever possible;
Providing granular user controls on devices and in companion apps for managing the collection, use, and sharing of personal neurodata;
Operationalizing best practices for security and privacy when storing, sharing, and processing neurodata including:
Encrypting sensitive personal neurodata in transit and at rest; and
Embracing appropriate security measures to combat bad actors.
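To make the “granular user controls” measure above concrete, here is a minimal, purely illustrative sketch of how a BCI companion app might model per-category consent settings as a data structure. The category names and the rule that sharing requires both collection and an explicit sharing opt-in are hypothetical assumptions, not requirements from the report:

```python
from dataclasses import dataclass, field

# Hypothetical neurodata categories a BCI companion app might expose.
CATEGORIES = ("attention", "emotion", "motor_intent")


@dataclass
class NeurodataConsent:
    """Per-category collection and sharing toggles, default-off."""
    collect: dict = field(default_factory=lambda: {c: False for c in CATEGORIES})
    share: dict = field(default_factory=lambda: {c: False for c in CATEGORIES})

    def allow_collection(self, category: str) -> None:
        if category not in self.collect:
            raise ValueError(f"unknown category: {category}")
        self.collect[category] = True

    def may_share(self, category: str) -> bool:
        # Sharing requires both collection and a separate, explicit opt-in,
        # reflecting the unbundled-consent idea discussed earlier.
        return self.collect.get(category, False) and self.share.get(category, False)


settings = NeurodataConsent()
settings.allow_collection("attention")
print(settings.may_share("attention"))  # False: no sharing opt-in yet
```

The default-off values reflect the idea that collection, use, and sharing of personal neurodata should each require a deliberate, separate choice rather than being bundled together.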
Stakeholders should also adopt policy safeguards including but not limited to:
Rethinking transparency, notice, terms of use, and consent frameworks to empower users with a baseline of BCI literacy around the collection, use, sharing, and retention of their neurodata;
Engaging IRBs, corporate review boards, ethical oversight, and other independent review mechanisms to identify and mitigate risks;
Facilitating participatory and inclusive community input prior to and during BCI development and rollout;
Creating dynamic technical, policy, and employee training standards to account for the gaps in current regulation; and
Promoting an open and inclusive research ecosystem by encouraging the adoption, where possible, of open standards for the collection and analysis of neurodata and the sharing of research data under open licenses and with appropriate safeguards in place.
Conclusion
Because the neurotechnology space is especially future-facing, developers, researchers, and policymakers will have to create best practices and policies that address existing concerns and strategically prioritize future risks, balancing the need for proactive solutions against the danger of misinformation and hype. BCIs will likely augment and complicate many technologies currently on the market, and privacy professionals will have to stay abreast of new developments to protect this quickly growing space.
Call for Nominations: 12th Annual Privacy Papers for Policymakers
The Future of Privacy Forum invites privacy scholars and authors with an interest in privacy issues to submit finished papers to be considered for FPF’s 12th annual Privacy Papers for Policymakers Award. This award provides researchers with the opportunity to inject ideas into the current policy discussion, bringing relevant privacy research to the attention of the U.S. Congress, federal regulators, and international data protection agencies.
The award will be given to authors who have completed or published top privacy research and analytical work in the last year that is relevant to policymakers. The work should propose achievable short-term solutions or new means of analysis that could lead to real world policy solutions.
FPF is pleased to also offer a student paper award for students of undergraduate, graduate, and professional programs. Student submissions must follow the same guidelines as the general PPPM award.
We encourage you to share this opportunity with your peers and colleagues. Learn more about the Privacy Papers for Policymakers program and view previous years’ highlights and winning papers on our website.
FPF will invite winning authors to present their work at an annual event with top policymakers and privacy leaders in February 2022 (date TBD). FPF will also publish a printed digest of the summaries of the winning papers for distribution to policymakers in the United States and abroad.
Learn more and submit your finished paper by October 15th, 2021. Please note that the deadline for student submissions is November 5th, 2021.
Upcoming data protection rulings in the EU: an overview of CJEU pending cases
There has been a surge in questions posed by national courts to the Court of Justice of the EU (CJEU) in the past year on how various provisions of the General Data Protection Regulation (GDPR) should be interpreted and applied in practice. They range from essential aspects of the fundamental right to the protection of personal data, such as the scope of one’s right to access their own data or the appropriate lawful ground for complex processing like profiling and personalized advertising, to systemic questions such as the interplay of competition law and data protection law in digital markets. They also seek to dispel enforcement conundrums, such as identifying and quantifying non-material damages for breaches of the GDPR or clarifying the ne bis in idem principle for cases under the parallel purview of Data Protection Authorities and national courts.
According to the EU Treaties, EU Member-States’ courts may – or, in case no appeal from their decisions is possible, must – ask the CJEU to rule on the interpretation and validity of disputed provisions of EU law. Such decisions are known as preliminary rulings, by which the CJEU expresses its ultimate authority to interpret EU law and which are binding for all national courts in the EU when they apply those specific provisions in individual cases.
Since May 2018, when the GDPR became applicable across the EU, the CJEU has played an important role in clarifying the meaning and scope of some of its key concepts. For instance, the Court notably ruled that two parties as different as a website owner that has embedded a Facebook plugin and Facebook may be qualified as joint controllers by taking converging decisions (Fashion ID case), that consent for online data processing is not validly expressed through pre-ticked boxes (Planet49 case) and that the European Commission Decision to grant adequacy to the EU-US Privacy Shield framework is invalid as a mechanism for international data transfers, and supplemental measures may be necessary to lawfully transfer data outside of the EU on the basis of Commission-vetted model clauses (in the Schrems II case).
Ever since the enactment of the 1995 EU Data Protection Directive, the CJEU has had a prominent role in expanding the scope of protection afforded to individuals by data protection law, in a way that ultimately influenced the text of the GDPR. Some notable examples include landmark rulings on the definition of personal data (in Breyer and Nowak), the lawfulness of transferring data to countries outside of the EU (in Schrems I) and the so-called “right to be forgotten” (in Google Spain).
What are the questions that the Court is asked to clarify next? This overview includes a preview of the most interesting cases where the CJEU is expected to weigh in. The analysis focuses on questions that are relevant from the perspective of commercial data use, meaning that novel questions about personal data processing in the context of law enforcement, passenger name records and national elections have not been included in the overview. Table 1 below contains a list of links to the relevant cases as submitted to the CJEU, allowing for a more comprehensive view.
1. Clarifying essential aspects of personal data protection: right of access; lawful grounds for processing data for targeted advertising
Both the very active Austrian Supreme Court of Justice (Oberster Gerichtshof) and the Austrian Federal Administrative Court (Bundesverwaltungsgericht) have sent questions to the CJEU about the information that controllers are required to hand over in response to data subjects’ access requests.
In March 2021, the former asked the EU’s highest court, in a case involving the Austrian Postal Office, whether under their right of access data subjects must be informed about the categories of recipients of their personal data even in the cases where specific recipients have not yet been determined, but disclosures to those recipients are planned for the future. Or should they only be informed about the categories of recipients with whom personal data was already shared?
More recently, in August 2021, the Federal Administrative Court sought clarifications from the CJEU regarding what obtaining a “copy of the personal data undergoing processing” means. In this respect, the Bundesverwaltungsgericht asks whether such a right entails receiving entire documents/database excerpts in which the personal data are included or a mere “faithful reproduction” of the personal data processed by the controller. If the latter is the case, the referring court also wishes to know if there are exceptions to the rule, for the benefit of data subjects’ comprehension. Lastly, Austrian judges also query whether the information that should be made available to data subjects in a “commonly used electronic format” is only the “copy” of the personal data or also all the elements of Article 15(1) GDPR (e.g., information about the purposes of the processing and data retention periods).
Two very complex sets of questions in cases involving processing of data for targeted advertising purposes on social media have also reached the CJEU in 2021. The Court’s answers are likely to shape the future of how social media companies and online advertising businesses process personal data in the EU.
The first one, from April, comes from the Higher Regional Court of Düsseldorf, Germany (Oberlandesgericht Düsseldorf), in a first case of its kind, combining antitrust and data protection enforcement (see also Section 3 below). In a case involving Facebook, the German court asks the CJEU if data collection through user interfaces placed on third party websites or apps that relate to Article 9(1) GDPR protected attributes (e.g., political party or health-related outlets) counts as processing special categories of data. Should people that visit such websites or apps or use the company’s plugins therein (e.g. “Like” buttons) be considered to have manifestly made their sensitive data public? As the European Data Protection Board (EDPB) has already provided guidance on these matters, it will be interesting to see to what extent the CJEU will endorse the EDPB’s interpretation or diverge from it.
The Oberlandesgericht Düsseldorf also seeks to clarify whether personal data may be lawfully collected and combined by the company when obtained from other Facebook Group services and third-party websites/apps to offer personalised content and advertising, under the “contract” or “legitimate interests” legal bases. In parallel, the court asks the CJEU to rule on whether GDPR-compliant consent may be effectively and freely expressed by users “to a dominant undertaking”.
These last questions resemble others that were posed more recently by the Austrian Oberster Gerichtshof. On July 20, 2021, the court essentially asked the CJEU (see an unofficial translation of the questions) to clarify whether the social media platform can rely on “contract” as lawful ground for processing personal data for personalized advertising, or whether it should rely on consent of its users under the GDPR (by asking against which of these two lawful grounds should the wording of its terms and conditions be assessed).
In addition to the consequential question about the appropriate lawful ground in this particular context, the Austrian court also invited the CJEU to clarify how the data minimisation and purpose limitation principles as provided by the GDPR should apply in the context of personalised online advertising, in particular when it comes to sensitive data.
2. Accountability and due diligence
The German Federal Labour Court’s (Bundesarbeitsgericht) reference of October 2020 invites the CJEU to shed light on the circumstances that may lawfully lead organisations to dismiss their appointed Data Protection Officers (DPOs). The EDPB DPO guidelines state that “a DPO could still be dismissed legitimately for reasons other than for performing his or her tasks as a DPO (for instance, in case of theft, physical, psychological or sexual harassment or similar gross misconduct)”. With this reference, the German court seeks to understand whether the CJEU shares the same view and, if so, whether Article 38(3) GDPR would preclude a German provision that forbids employers from terminating the employment relationship with their DPOs in all cases, including for reasons other than the performance of the latter’s tasks.
Additionally, the referring court asks the CJEU whether the GDPR limitations on dismissal also apply to those DPOs who are appointed pursuant to a domestic law obligation, where the GDPR itself does not require their appointment.
Looking at a different accountability-related obligation, in a June 2021 reference, the Bulgarian Supreme Administrative Court (Varhoven administrativen sad) wishes to know if the mere occurrence of a data breach is sufficient to ascertain that the controller has not implemented appropriate technical and organisational measures to prevent the breach. In case of a negative answer, the CJEU is asked to further provide a benchmark against which national courts may assess the appropriateness of the implemented measures.
3. Administrative enforcement: How far can DPAs go and do antitrust authorities play a role?
In a set of questions from March 2021, the Budapest Regional Court (Fővárosi Törvényszék) aims to ascertain how far the GDPR-prescribed independence and corrective powers of Data Protection Authorities (DPAs) go. While it seems to be clear that individuals and companies have a right to lodge a judicial appeal against DPAs’ decisions or their inaction (see Article 79 GDPR), the Hungarian court highlights situations where both DPAs and courts are simultaneously called by individuals to assess the lawfulness of the same data processing operations.
Should DPAs have priority competence to determine GDPR infringements? Or should both DPAs and Courts independently examine the existence of an infringement, possibly arriving at different conclusions? May a DPA find a GDPR breach where, in parallel proceedings, a court has found that there was no such breach? The CJEU is thus expected to clarify how the ne bis in idem principle manifests under the complex enforcement system of the GDPR.
In another case already mentioned above (see Section 1), the Oberlandesgericht Düsseldorf seeks to clarify the fundamental question of how antitrust law enforcement and data protection rules interact and whether antitrust regulators may play a role in safeguarding data protection law as part of antitrust proceedings.
This case started from a 2019 decision of the German federal antitrust authority (Bundeskartellamt) against Facebook. The authority found a breach of German competition law with regard to abuse of market dominance by also relying on GDPR provisions in its assessment. These findings primarily concerned rules around valid consent for combining personal data across several services of the social media company. One of the measures imposed by the authority was a prohibition to collect user and device related data obtained from the use of its affiliated services, as well as from visits to third-party websites or apps without valid consent from users.
Facebook appealed the decision before German courts, with the court of appeal (Oberlandesgericht Düsseldorf) expressing doubts about the legality of the antitrust regulator’s decision and suspending its effects as an interim measure until the matter is decided on substance. In turn, the German Federal Court of Justice’s antitrust division overturned this interim measure of the court of appeal, deciding that the prohibition ordered by the Bundeskartellamt can be enforced while judicial proceedings are ongoing, before sending the case back to Düsseldorf to be decided on substance.
The court in Düsseldorf suspended proceedings and asked the CJEU to clarify a number of essential questions (see also Section 1 above). In this context, can the Bundeskartellamt determine a GDPR breach by the company investigated in antitrust proceedings and order its correction, given that the regulator is not a supervisory authority under the Regulation, let alone the lead one? The referring Court noted that the Irish Data Protection Commissioner – as the lead DPA of the company – was already investigating alleged GDPR breaches relevant for this case.
4. Judicial redress: Can competitors engage in representative actions? And do “worries” and “fears” count as non-material damages?
An interesting question posed by the Austrian Supreme Court of Justice in December 2020 relates to whether persons other than harmed data subjects may initiate judicial proceedings for GDPR breaches against the infringer. The Austrian court wishes to know if Article 80(2) GDPR allows competitors, associations, entities and Chambers to sue, regardless of invoking specific data subjects’ rights infringements and the latter’s mandate, in cases where such bodies are entitled to initiate proceedings under national consumer law.
On such matters, the literature argues that Article 80(2) leaves it up to Member-States to determine whether non-profits with public interest statutory objectives and which are active in the defense of data subjects’ rights may bring own-initiative proceedings in their territory. Thus, it will be particularly interesting to see how the CJEU views the ability of competitors to sue other companies in putative defense of data subjects’ collective interests, notably in the absence of alleged infringements of individuals’ rights.
In May 2021, the Oberster Gerichtshof (Austria) asked important questions to the CJEU related to non-material damages under the GDPR: can courts attribute compensation to data subjects where a GDPR provision has been infringed, but the data subjects have not suffered harm? And, if demonstrating harm is necessary, does Article 82 GDPR require data subjects’ non-material damages to go beyond the mere nuisance or discomfort caused by the infringement?
Just a month later, the Bulgarian Supreme Administrative Court went further and asked whether data subjects’ worries, fears and anxieties caused by a confidentiality breach involving personal data qualify as non-material damages which entitle them to compensation, even where data misuse by third parties has not been established and/or data subjects have not suffered any further harm.
According to its 2020 Annual Report, the average length of proceedings at the CJEU was 15.4 months in the past year. Therefore, it will take a while before the Court clarifies the questions summarized above – and it should be expected that for the very complex ones that raise novel issues, like the interaction between antitrust and data protection law, the proceedings will be longer than average. This overview of questions for preliminary rulings in any case indicates that while there are many GDPR provisions that need clarification, some of the most intricate issues raised by complex personal data processing and how data protection law applies to them have now reached the top court in the EU.
Table 1 — Pending data protection questions sent to the CJEU
Article 5(1)(a) and (e) GDPR; Article 6(1)(f) GDPR; Article 17 GDPR; Article 40 GDPR; Articles 77 and 78 GDPR.
Joint Project to Explore Limits of Consent in Asia-Pacific Data Privacy Regimes
The Future of Privacy Forum (FPF), a non-profit organization that serves as a catalyst for privacy leadership and scholarship, has partnered with the Asian Business Law Institute (ABLI), a subsidiary of the Singapore Academy of Law (SAL). With this partnership, FPF Asia Pacific and Singapore’s top legal think tank join forces to offer a unique cooperation platform to support the convergence of data protection regulations and best privacy practices in the region.
It is envisioned that this collaboration will result in research, publications, and events. The agreement will build on the substantial work already done by the two think tanks in this area. FPF recently launched an Asia-Pacific office, which aims to advise stakeholders on emerging privacy laws and frameworks in the region.
The first joint activity of this partnership is an online seminar co-hosted by the Personal Data Protection Commission (PDPC) of Singapore. The event, titled “Exploring Trends: From ‘Consent-Centric’ Frameworks to Responsible Data Practices and Privacy Accountability in Asia Pacific,” takes place on September 16 from 2-4PM SGT during Singapore’s Personal Data Protection Week. The virtual panel will highlight the limits of the consent-based approach to data protection as it has developed in the region and globally, and the usefulness of developing alternatives, like GDPR-inspired “legitimate interests.” Opening remarks will be made by Yeong Zee Kin, PDPC Deputy Commissioner, with closing remarks by Raymund Liboro, National Commissioner of the Philippines and co-chair of the ASEAN Data Privacy Forum. The panels consist of top data protection and privacy experts in Asia-Pacific countries, along with FPF Senior Fellow for India Malavika Raghavan and FPF Managing Director for Europe Rob Van Eijk.
“In Asia-Pacific as elsewhere, obtaining the user’s consent has long been considered the basis of any regulatory and compliance approach to data protection,” said Dr. Clarisse Girot, the new manager of FPF’s Asia-Pacific office. “Today, this approach has been called into question, and an increasing number of regulators and privacy professionals are promoting accountability over a consent-centric approach to data protection. However, the fragmentation of data protection laws in Asia Pacific is an obstacle to the development of common regional solutions.”
This theme will be one of FPF Asia Pacific’s priorities for the remainder of 2021, in close coordination with the regulators in the region. FPF and ABLI will also release a publication in the coming months that will provide a comparative analysis on the requirements for consent in Asia-Pacific data privacy laws, including recommendations for consistent implementation across the region.
The publication will include insights from highly respected data protection experts across 14 jurisdictions in the Asia-Pacific region. The aim is to propose solutions which are neutral and adapted to the context of each of the jurisdictions covered by this project.
“There are synergies between FPF and ABLI’s work on data privacy,” said Rama Tiwari, Chief Executive at SAL. “This focus on data privacy is especially critical as responses to the COVID-19 situation rely massively on data use and data flows.”
For more information on the Asian Business Law Institute and the Singapore Academy of Law, please visit: https://sal.org.sg/
The Future of Privacy Forum (FPF) and SAE’s Mobility Data Collaborative (MDC) have created a transportation-tailored privacy assessment that provides practical and operational guidance to organizations that share mobility data, such as data from the use of ride-hailing services, e-scooters, or bike-sharing programs. The Mobility Data Sharing Assessment (MDSA) will help organizations assess and reduce privacy risks in their data-sharing processes.
New mobility options are being rapidly adopted in many cities, and there is a need to share data so that cities can manage the public right-of-way and companies can offer or improve services and products.
“These are practical resources to support mobility data sharing between organizations in both the public and private sectors,” said Chelsey Colbert, Policy Counsel at FPF. “The tool is interoperable with leading industry frameworks, and it is technology-neutral so it may be used for any data sharing methods related to ride-hail and micromobility.”
The goal of the MDSA is to enable responsible data sharing that protects individual privacy, respects community interests and equities, and encourages transparency to the public. Equipping organizations with an open-source, interoperable, customizable, and voluntary framework, accompanied by guidance, will reduce the barriers to sharing mobility data.
The Mobility Data Sharing Assessment consists of the following components:
A Tool that provides a practical, customizable, and open-source assessment for organizations sharing mobility data.
An Operator’s Manual that provides detailed instructions, guidance, and additional resources to assist organizations as they utilize the tool.
An Infographic that provides a visual overview of the MDSA process.
“If data from mobility initiatives are going to be used to solve today’s complex mobility challenges, organizations need to be able to conduct thoughtful, in-depth legal and policy reviews,” said Pooja Chaudhari, Head of New Mobility at SAE International and Director of the MDC. “The MDSA can be used by both data providers and recipients to drive innovation in the ever-evolving mobility landscape while ensuring user privacy.”
Learn more about the Mobility Data Collaborative on its website.
Chelsey Colbert is Policy Counsel at the Future of Privacy Forum. Chelsey leads FPF’s portfolio on mobility and location data, which includes connected and automated vehicles, ride-sharing, micro-mobility, drones, delivery robots, and mobility data sharing.
Kelsey Finch, CIPP/US, is Senior Counsel at the Future of Privacy Forum and represents FPF from Seattle, WA. Kelsey leads FPF’s projects on smart cities and communities, data de-identification, ethical data-sharing and research, and other select projects, and serves as an expert and thought leader across the country through speaking engagements, media interviews, and interaction with local, state, and federal regulators and strategic partners.
UPDATE: China’s Car Privacy and Security Regulation is Effective on October 1, 2021
The author thanks Hunter Dorwart for his contribution to this blog.
The purpose of the enacted regulation is to regulate automobile data processing activities, protect the legitimate rights and interests of individuals and organizations, safeguard national security and public interests, and promote the rational development and utilization of automobile data, in accordance with the “Network Security Law of the People’s Republic of China” and the “Data Security Law of the People’s Republic of China.”
The enacted regulation for car privacy and security is not the biggest news to come from China. On August 20, 2021, the National People’s Congress (NPC) of China adopted the first Chinese comprehensive data protection law, the Personal Information Protection Law (PIPL), less than a year after the first draft of the law was published. The PIPL will go into effect on November 1, 2021. For more about PIPL, see my colleague’s recent blog post.
The enacted regulation should be read in conjunction with other laws, regulations, and standards in China’s emerging data protection regime. In theory, laws passed by the National People’s Congress (NPC), such as the Personal Information Protection Law (PIPL), take priority over administrative regulations, such as the one detailed in this post and other regulations passed within China’s larger regulatory bureaucracy such as the recent market regulations for the ride-hailing industry. The Cyberspace Administration of China (CAC) is technically not a government agency but rather a super-ministerial body directly under the State Council. It drafts regulations with the input and agreement of other agencies but operates largely independently of them.
This post is an update to a post from May 18, where I summarized the draft regulation (“Several Provisions on the Management of Automobile Data Security”). This post compares the draft regulation with the enacted regulation, highlighting the notable changes between the two.
Updated scope of covered entities: “Automobile data processors”
Automobile data processors are organizations that carry out automobile data processing activities, including automobile manufacturers, parts and software suppliers, dealers, maintenance organizations, and ride-hailing and -sharing companies (出行服务企业).
In contrast, the draft regulation applies to “operators”, which are defined as “automobile design, manufacturing, and service enterprises or institutions, including automobile manufacturers, component and software providers, dealers, maintenance organizations, online car-hailing companies, insurance companies, etc.”
Covered data: Distinction among “personal information,” “important data,” and “sensitive personal information,” plus a new data type “automobile data”
The enacted regulation adds a fourth type of data: “automobile data.” Automobile data covers the personal information and important data involved in the process of automobile design, production, sales, use, operation, and maintenance.
Automobile data processing includes the collection, storage, use, processing, transmission, provision, and disclosure of automobile data.
Personal information refers to information related to the identified or identifiable vehicle owner, driver, passenger, and people outside the vehicle that have been recorded electronically or by other means and does not include anonymized information. This mirrors China’s Personal Information Protection Law’s definition of personal information but narrows the scope to information specific to the use of vehicles.
In contrast, the definition of “personal information” in the draft regulation includes “the personal information of car owners, drivers, passengers, pedestrians, etc., as well as various information that can infer personal identity and describe personal behavior.”
Sensitive personal information refers to personal information that, once leaked or illegally used, may cause discrimination against car owners, drivers, passengers, and people outside the car, or serious harm to personal and property safety, including vehicle location, audio, video, images, and biometric data.
The draft regulation’s definition of “sensitive personal information” (found in Article 8) includes “data that can be used to determine illegal driving.” The enacted regulation does not have this and instead includes “personal information that once leaked or illegally used, may cause discrimination against car owners, drivers, passengers, and people outside the car, or serious harm to personal and property safety.”
Important data refers to data that may endanger national security, public interests, or the legitimate rights and interests of individuals or organizations once it has been tampered with, destroyed, leaked, or illegally obtained or used, including:
(1) Geographical information, personnel flow, vehicle flow, and other data in important sensitive areas such as military management zones, national defense science and industry units, and party and government agencies at or above the county level;
(2) Data reflecting economic operation conditions such as vehicle flow and logistics;
(3) Operating data of the car charging network;
(4) Video and image data outside the car including face information, license plate information, etc.;
(5) Personal information involving more than 100,000 personal information subjects;
(6) Other data that may endanger national security, public interests, or the legitimate rights and interests of individuals and organizations as determined by the State Cyberspace Administration and the State Council’s development and reform, industry and information technology, public security, transportation, and other relevant departments.
The definitions of “important data” in the draft and enacted regulation are similar, and both regulations contain specific provisions for automobile data processors processing and sharing this type of data (see below).
Obligations based on the Fair Information Practice Principles (or the Personal Information Protection Principles)
Article 4 requires that the processing of automobile data by automobile data processors be legal, proper, specific, and clear. Automobile data processing must be directly related to the design, production, sale, use, operation, and maintenance of the vehicle. This Article is similar to language in the draft regulation’s Article 4; however, the enacted regulation has broader language.
Article 5 in both the draft regulation and the enacted regulation is about security and data protection. Automobile data processors must implement network security grade protection, strengthen automobile data protection, and perform data security obligations in accordance with the law.
Article 6 in both the draft regulation and the enacted regulation lists several privacy best practices that automobile data processors are encouraged to follow when processing automobile data (note that this Article applies to “automobile data”). The enacted regulation has four, while the draft had five. In the enacted regulation, the principle of data retention has been moved from Article 6 to Article 7.
The principles in Article 6 are now:
Process automobile data inside the vehicle unless it is necessary to send it outside the vehicle.
Non-collection by default. Unless the driver chooses otherwise, the default is to not collect automobile data.
A principle of “accuracy range application”: the range, coverage, and resolution of cameras, radars, and other sensors should be determined by the accuracy actually required for the data processing.
Desensitization treatment: apply anonymization and de-identification whenever possible.
Article 6(3) is notable because it appears to introduce a technical standard, which may be considered outside of the scope of a privacy (or data protection) regulation. However, the press release, which contains answers to reporters’ questions, notes that “[d]uring the formulation of the “Regulations,” both safety and development were emphasized…” and “driving safety” is mentioned throughout the enacted regulations. There may be more information provided through technical or industry standards about what exactly this means for manufacturers.
Article 7 in both the draft regulation and the enacted regulation applies to “personal information” (not the broader “automobile data”) and requires automobile data processors to notify individuals through manuals, on-board display panels, voice, and other vehicle-related applications.
The draft regulation lists four things the individual must be made aware of, while the enacted regulation lists seven. As noted above, retention has been moved to Article 7. The complete list of information that must be communicated to individuals is:
(1) The types of personal information processed, including vehicle location, driving habits, audio, video, images, and biometric features, etc.;
(2) The specific circumstances under which various types of personal information are collected and the ways and means to stop the collection;
(3) Purposes, uses, and methods of processing various types of personal information;
(4) Personal information storage location and retention period, or rules for determining the storage location and retention period;
(5) Ways and means of consulting and copying their personal information, deleting the information collected inside of the vehicle, and requesting to delete the personal information that has been provided outside the vehicle;
(6) The name and contact information of the contact person for exercising user rights;
(7) Other matters that should be notified as required by laws and administrative regulations.
While it is not clear from the text, Article 7 appears to use “individuals” more narrowly than the definition of “automobile data,” which includes people outside of the vehicle; in Article 7, “individuals” appears to exclude them. This could be because of the practical challenge of effectively communicating all of the above information to pedestrians, whose interactions with the vehicle may be fleeting. The provisions in Article 7 also appear to focus on types of data collected inside of a vehicle.
Articles 8 and 9 in the draft and enacted regulations have been switched.
Article 8 of the enacted regulation (draft regulation Article 9) is about consent to process personal information. When processing personal information, automobile data processors must obtain consent or comply with other requirements as stipulated by laws and administrative regulations.
Notably, the new Article 8 mentions safety, stating (paraphrased and translated) that:
Due to the need to ensure the safety of driving, automobile data processors that cannot obtain personal consent to collect personal information from outside the vehicle, and that share personal information outside of the vehicle, should anonymize the data, including deleting the images or videos that can identify natural persons or applying partial contour processing to facial information in the images (对画面中的人脸信息等进行局部轮廓化处理等, which appears to mean reducing the features of someone’s face to a larger outline of the person).
The draft regulation’s consent provision (Article 9) did not reference vehicle safety and instead recognized that it might be difficult in practice to obtain consent.
Article 9 of the enacted regulation (draft regulation Article 8) lists requirements for processing “sensitive personal information” and notes that automobile data processors must also meet requirements under other applicable laws, administrative regulations, and mandatory national standards.
Again, one notable difference between the two regulations is that in this particular Article about processing “sensitive personal information”, the draft regulation mentions “driving safety” once, while the enacted regulation mentions it three times. This further illustrates a greater focus on balancing vehicle and driving safety with privacy and security.
The five requirements for processing “sensitive personal information” in Article 9 are:
(1) Having the purpose of directly serving individuals, including enhancing driving safety, intelligent driving, navigation, etc.;
(2) Notifying the necessity and impact on individuals through obvious means such as user manuals, on-board display panels, voice, and car use-related applications;
(3) Individual consent should be obtained, and the individual can independently set the time limit for consent;
(4) Under the premise of ensuring the safety of driving, prompt the collection of data in an appropriate manner to provide convenience for individuals to terminate the collection;
(5) If an individual requests deletion, the automobile data processor shall delete it within ten working days.
There are a few differences between the draft and enacted regulations that are worth noting here.
The requirement in 9(1) is almost identical, except that the enacted regulation does not explicitly include the purpose of “entertainment” and uses “intelligent driving” instead of “assisting driving”. This requirement ends with “etc.”, so it is presumably not a closed list, and “entertainment” may still be read in.
The requirement for notice in 9(2) is enhanced in the enacted regulation. The draft regulation (in 8(3)) requires that the individual be informed that sensitive personal information is collected. The enacted regulation requires that individuals be notified of the necessity and the impact on them.
The draft regulation requires that individuals be able to terminate the collection of sensitive personal data at any time (8(4)). Being able to stop the collection of this data at any time could have raised safety concerns if, for example, the driver terminated the collection while the car was in operation without understanding how that data was being used to operate the car. The enacted regulation has updated language, which may address this concern: individual consent is required, and the individual can set the time limit for consent (9(3)).
Related to 9(3), the enacted regulation states in 9(4) that “Under the premise of ensuring the safety of driving, prompt the collection status in an appropriate manner to provide convenience for individuals to terminate the collection.”
The enacted regulation does not include Article 8(5), “Allow vehicle owners to conveniently view and make structured inquiries about the collected sensitive personal information.”
The draft regulation requires that sensitive personal data be deleted within two weeks upon request by the “driver.” The enacted regulation requires automobile data processors to delete that data within ten working days if requested by an “individual.”
The enacted regulation also includes a purpose and necessity requirement for the collection of a particular type of sensitive personal information: biometric data (such as fingerprints, voiceprints, human faces, heart rhythms). This appears to replace the draft regulation’s Article 10, which focuses on biometric data. Note that the press release states that “[r]egarding personal biometric information, it is clear that the car data processor has the purpose of enhancing driving safety and is necessary to collect it,” underscoring the sensitivity of biometric data and the high bar required to process this type of data.
Article 10 of the enacted regulation adds a new requirement for automobile data processors who process “important data.” In this situation, automobile data processors must conduct risk assessments in accordance with regulations and submit risk assessment reports to the provincial, autonomous region, and municipal network information departments and relevant departments.
The risk assessment report shall include the type, quantity, scope, storage location and period, and use of the important data processed; the status of data processing activities and whether the data is provided to a third party; and the data security risks faced and countermeasures, etc.
This appears to replace Article 11 of the draft regulation, which requires operators to report to the provincial network information department and relevant departments similar information about important data, but does not use the term “risk assessment.”
“Important Data”
Both the draft and enacted regulations contain several Articles pertaining to “important data”. (Articles 11-17 in the draft regulation and Articles 10-14 in the enacted regulation).
As noted above, automobile data processors are required to conduct a risk assessment when processing important data (Article 10).
Article 11 requires important data to be stored in China unless it is necessary to provide it overseas for business purposes. In this situation, there must be a security assessment (“exit safety assessment”) conducted by the State Cyberspace Administration of China with the relevant departments of the State Council.
Automobile data processors should not exceed the purpose, scope, method, type, and scale of important data specified in the exit safety assessment when this data is shared overseas (Article 12). The national cybersecurity and informatization department, in conjunction with relevant departments of the State Council, will verify the matters specified in the exit safety assessment by means of random inspections.
Article 13 requires automobile data processors who process important data to report the following automobile data security management information to the provincial, autonomous region, and municipal network information department and relevant departments before December 15 of each year:
(1) The name and contact information of the person in charge of automobile data security management and the contact person for processing user rights;
(2) The type, scale, purpose, and necessity of processing automobile data;
(3) Safety protection and management measures for automobile data, including storage location, period, etc.;
(4) Providing automobile data to domestic third parties;
(5) Car data security incidents and handling conditions;
(6) User complaints and handling of automobile data;
(7) Other automobile data security management conditions specified by the State Cyberspace Administration in conjunction with the State Council’s industry and information technology, public security, transportation, and other relevant departments.
In addition to the above requirements in Article 13, if automobile data processors share important data overseas there are additional reporting requirements found in Article 14. Articles 13 and 14 replace Articles 17 and 18 in the draft regulation.
Article 15 states that anyone participating in the exit safety assessment must not disclose the trade secrets or other confidential information learned during the assessment or use any information for purposes other than the assessment.
Article 16 appears to be an affirmation that China supports intelligent and connected vehicle operations and will cooperate with automobile data processors to strengthen and secure the network.
Article 17 requires auto data processors to establish appropriate complaints and reporting portals to handle user complaints.
Article 18 replaced Article 20 in the draft regulation but has similar language regarding violations and penalties.
In summary, the processing and sharing overseas of “important data” may trigger the requirement of five separate assessments, reports, or inspections.
Risk assessment: All automobile data processors who process important data should complete a risk assessment. Risk assessments are submitted to the provincial, autonomous region, and municipal network information departments and relevant departments (Article 10).
Exit security assessment: If an automobile data processor finds it is necessary to share important data outside of China for business purposes, the automobile data processor must pass a security assessment organized by the national network information department in conjunction with the relevant departments of the State Council (Articles 11 and 12).
Random inspection: The State Cyberspace Administration and relevant departments of the State Council will conduct random inspections to verify the information automobile data processors record in their exit security assessment (Article 12).
Annual report: All auto data processors that process important data must file an annual automobile data security report (Article 13).
Annual supplementary report: If an automobile data processor finds it is necessary to share important data outside of China for business purposes, the automobile data processor must supplement the annual report referenced in Article 13 with additional information (Article 14).
Summary of the Main Differences between the Draft Regulation and the Enacted Regulation
The enacted regulation has a new defined term: “automobile data.” This term appears to be a shorthand to refer to both “personal information” and “important data”. “Sensitive personal information” is a subset of “personal information.”
The definition of “personal information” has been updated.
“Sensitive personal information” is explicitly defined and somewhat clarified. The draft regulation’s definition of “sensitive personal information” includes “data that can be used to determine illegal driving”. The enacted regulation does not include this and instead refers to “personal information that once leaked or illegally used, may cause discrimination against car owners, drivers, passengers, and people outside the car, or serious harm to personal and property safety”.
There is a new risk assessment requirement for processing “important data.”
The draft regulation applies to “operators,” and the enacted regulation applies to “automobile data processors”. The definitions of both of these terms are different.
The principle of data retention has been moved from Article 6 (privacy best practices) to Article 7 (requirements to notify individuals).
More emphasis is placed on “driving safety” in the enacted regulation. For example, see Articles 8 and 9 and the press release. This further illustrates a greater focus on the importance of balancing vehicle and driving safety with privacy and security. This balance or, at times, tension will likely appear in both vehicle and automated vehicle regulations globally.
The enacted regulation has an updated deletion request timeline for sensitive personal data.
The enacted regulation has additional requirements and considerations for automobile data processors processing or sharing “important data” overseas.
Conclusion
Some challenges and considerations raised by the enacted regulation are 1) the coming-into-force date; 2) the introduction of what appear to be technical standards without further detail (e.g., Article 6(3)); and 3) that it is not always clear to whom exactly “individuals” refers (e.g., Article 7). The coming-into-force date is October 1, 2021. Many of the requirements and best practices throughout the regulation likely require software, hardware, and design changes, and the tight deadline could prove challenging for automakers, where the average design and manufacturing cycle for a vehicle can be two to three years.
The enacted regulation highlights the complexity of the mobility ecosystem in two ways. First is the complexity of the data flows, evidenced by the regulation defining three types of data commonly processed by automobile data processors and introducing a new umbrella data term “automobile data.” Second is the complexity of parties involved, evidenced by the broad definition of “personal information,” which includes the vehicle owner, driver, passengers, and people outside of the vehicle. Similarly, “automobile data processors” is also defined fairly broadly and includes vehicle manufacturers, hardware and software suppliers, dealers, repair shops, and ride-hail companies.
Also notable are the references to and emphasis on driving safety. As vehicles become more connected and automated, safety standards will increasingly influence data processing and thus privacy and data protection regulations, which will in turn impact vehicle design, operations, and safety. This circle of influence underscores the importance of privacy and data protection experts working closely with product designers and computer scientists. Privacy and data protection are slowly but surely moving from the risk and compliance office and into the product and engineering offices. As we travel along the road of car privacy and security regulations, this trend is sure to speed up.
China’s New Comprehensive Data Protection Law: Context, Stated Objectives, Key Provisions
On August 20, 2021, the National People’s Congress (NPC) of China adopted the Personal Information Protection Law (PIPL), the first comprehensive Chinese data protection law, less than a year after the first draft of the law was published. The NPC thus concluded a legislative process that saw two additional markups of the law since October of last year. The PIPL will go into effect on November 1, 2021, but many companies within China are already coordinating with relevant enforcement agencies to comply. The adoption of the PIPL occurs in the wake of enhanced scrutiny over the tech sector by the Chinese government, and within a year of the entry into force of the new Civil Code, which includes specific provisions for the protection of personal information.
The PIPL represents one pillar of China’s emerging data protection architecture, which includes a myriad of other laws, industry-specific regulations, and standards. For instance, the recently enacted Data Security Law (DSL) sets forth a comprehensive list of requirements regarding the security and transferability of other types of data. It also establishes a “marketplace for data” to enable data exchange and digitalization. Additionally, the PIPL explicitly references China’s Constitution to provide a firmer legal basis for the implementation of its data protection goals (Art. 1). As such, the PIPL should not be viewed in isolation but rather examined in relation to these other regulatory tools that serve complementary, albeit different, purposes.
The PIPL will mainly serve as China’s comprehensive data protection law, following in this respect the European approach which clearly distinguishes the protection of privacy from the protection of individuals with regard to the processing of their personal information (“data protection”). Its officially declared aims are thus:
to protect the rights and interests of individuals (为了保护个人信息权益),
to regulate personal information processing activities (规范个人信息处理活动),
to safeguard the lawful and “orderly flow” of data (保障个人信息依法有序自由流动),
to facilitate reasonable use of personal information (促进个人信息合理利用) (Art. 1).
Throughout the legislative process, experts and privacy professionals contributed to the legislator’s work, drawing among other things on their experience implementing the EU’s General Data Protection Regulation (GDPR), which served as a reference in this exercise as it did in the drafting of previous data protection instruments such as the Personal Information Specification. It is not unusual for Chinese lawmakers to draw inspiration from texts and codes in the European continental law tradition, China itself being a civil law jurisdiction.
The PIPL, however, serves several other objectives, which distinguish it from the majority of data protection laws adopted to date around the world. Like its previous preparatory versions, the law has a distinct ‘national security’ flavor, particularly around its provisions on localization and cross-border transfers.
The law also incorporates provisions that affirm China’s intention to defend its digital sovereignty: overseas entities which infringe on the rights of Chinese citizens or jeopardize the national security or public interests of China will be placed on a blacklist and any transfers of personal information of Chinese citizens to these entities will be restricted or even barred. China will also reciprocate against countries or regions that take “discriminatory, prohibitive or restrictive measures against China in respect of the protection of personal information” (Art. 43).
Last but not least, the PIPL clearly states China’s ambition to take full part in international data protection discussions and thus assert influence commensurate with the size of its economy and its growing technological capabilities. In particular, the PIPL states China’s aim to actively contribute to the setting of global data protection standards ‘with other countries, regions, and international organizations’ (Art. 12). Related provisions of the PIPL echo the stated ambition of influencing international negotiations that relate directly or indirectly to international data transfers. These provisions should therefore be read in the broader perspective of the Belt & Road Initiative (BRI) and the data transfer provisions of the Regional Comprehensive Economic Partnership (RCEP), conceived as a “regional backup” to negotiations on WTO e-trade rules, the so-called JSI negotiations.
Overview of the PIPL
At a broad level, like most data protection laws modelled after the GDPR and other modern data protection laws, the PIPL sets forth a range of obligations, administrative guidelines, and enforcement mechanisms with respect to the processing of personal information. For instance, it applies to very broadly defined “personal information” (PI), which includes the “identifiable” element from the GDPR; it includes lawful grounds for processing after the GDPR model, though with “legitimate interests” notably missing; and it applies to the “handling” of PI, which includes “collection”, meaning that a lawful ground is needed even before touching the data.
Additionally, the PIPL has rules for “handlers”, “joint handling”, and “entrusted parties” that handle PI on behalf of handlers (the counterparts of controllers, joint controllership, and processors), including agreements to be put in place similar to Art. 26 and Art. 28 agreements under the GDPR. It likewise applies in the public sector as well as the private sector, and has data localization requirements with regard to PI processed by state organs, critical infrastructure operators, and other handlers reaching a specific volume of PI processed.
The law regulates personal information transfers outside of China by imposing obligations on handlers before transferring data abroad, such as passing a security assessment by the relevant authorities. It also mandates risk assessments (similar to a Data Protection Impact Assessment) for specific processing, including automated decision-making and handling that could have “a major influence on individuals.” Data handlers must also appoint Data Protection Officers (DPOs) in specific situations, depending on the volume of PI processed, and conduct regular compliance training.
Individuals are granted an extensive number of “rights in personal information handling activities”. The PIPL provides for individual rights very similar to GDPR’s “rights of the data subject”, such as erasure and access, and it specifically includes a right to obtain explanation and a right to data portability, the latter being introduced late in the third version of the draft law.
Finally, the PIPL has a complex system of enforcement, including fines (that can go up to 5% of a company’s turnover) and administrative action (including orders to stop processing, or confiscation of unlawfully obtained profit), individual rights to obtain compensation, and civil public interest litigation cases through a public prosecutor.
The PIPL is divided into eight substantive chapters. Below we summarize the key aspects of the law and provide preliminary analysis.
1. Covered Data: Personal Information, Sensitive Information
The law applies to the “handling” of “personal information”, in both the private and public sectors. Unlike the GDPR and its Article 4, the PIPL contains no general provision defining the key terms of the law. Rather, notable definitions are scattered throughout the text and sometimes included directly in a more specific provision. Most of the definitions contained in the law are similar to, or use wording identical to, that of the GDPR, with notable variations.
Broad Definition of Personal Information (个人信息)
Personal information (PI) refers to “all kinds of electronic or otherwise recorded information related to an identified or identifiable natural person” (Art. 4). This definition largely mirrors the one set forth under the Cybersecurity Law and Chinese Civil Code, which define personal information as “the various types of electronic or otherwise recorded information that can be used separately or in combination with other information to identify the natural person.” Relatedly, it resembles the broad definition of “personal data” in the GDPR as “any information relating to an identified or identifiable natural person.”
Open List of Sensitive Information (敏感个人信息)
The law further specifies that sensitive information means “personal information that once leaked or illegally used may cause discrimination against individuals or grave harm to personal or property security, including information on race, ethnicity, religious beliefs, individual biometric features, medical health, financial accounts, individual location tracking, etc.” (Art. 28). Information handlers may only process sensitive personal information for specific purposes and when sufficiently necessary (Art. 28). Handlers shall further obtain specific consent if they rely on individual consent for processing (Art. 29).
Notably, the definition of sensitive information diverges from the GDPR’s “special categories” of personal data, which is a closed list of specific types of personal data (see Article 9). The PIPL has an open list of sensitive data, centering its definition around the notion of harm and the data’s potential discriminatory impact on individuals. Unlike the GDPR, the PIPL contains no specific provision regarding the processing of PI related to criminal convictions and offences. In contrast, financial information and location data are included in the scope of sensitive PI, to the effect of subjecting their handling to obtaining the individual’s specific consent. The extension of the scope of sensitive information to cover financial information has been noted in other jurisdictions like India.
Finally, the PIPL treats biometric data as sensitive PI. This qualification resonates with the specific provisions in the law on facial recognition (see below).
De-identification and Anonymization are Defined Separately
“De-identification” and “anonymization” are defined in the very last substantive provision of the PIPL (Art. 73), with anonymized PI being specifically excluded from the material scope of the law (Art. 4).
De-identification (去标识化) is defined similarly to the GDPR’s pseudonymization: the “process of personal information undergoing handling to ensure it is impossible to identify specific natural persons without support of additional information” (Art. 73). Beyond this definition, the PIPL refers to de-identification only when listing it among the technical security measures that PI handlers may use to comply with their security obligations (Art. 51).
Anonymization (匿名化) refers to the “act of personal information handling to make it impossible to identify specific natural persons and impossible to restore”. PI that has been anonymized is specifically excluded from the scope of the law; in addition, third parties are prohibited from attempting to re-identify anonymized information they receive from handlers.
Personal Information Handling (个人信息的处理) Defined Broadly to Cover the Entire Lifecycle of PI
PI handling includes “the collection, storage, use, processing, transmission, provision, publishing, and other such activities” of personal information (Art. 4). This resembles the definition of processing under the GDPR and it means that the rules proposed in the law apply to the collection of PI as well as to the use of PI. Since the law includes lawful grounds for handling PI (see below), this means that such grounds must be in place before a handler collects the data.
2. Covered Actors, both in the Public and Private sectors
Information Handler or “Controller” (个人信息处理者); “Entrusted parties”/Processors;
Conventionally, the law has rules on controllers, joint controllers, and processors. Parties or individuals become personal information handlers when they “independently determine the purposes and means for handling of PI” (Art. 73). PI handlers appear to function under the PIPL in a similar manner to data controllers under the GDPR. Note that neither the law nor any other legislation in China specifically uses the term “controller” (控制者).
The law also provides rules on joint controllership, “where two or more PI handlers jointly decide on a PI handling purpose and handling method”. Joint handlers have to “agree” on the rights and obligations of each, and the agreement should not affect the possibility for an individual to exercise their rights against any of them; they are also jointly liable for breaches (Art. 21).
Handlers can entrust handling of PI to third parties under very similar conditions to the controller-processor relationship in the GDPR. They have to conclude an agreement, which has to refer to the purpose of the entrusted handling, the handling method, categories of PI, the rights and duties of both sides etc., including ways to “conduct supervision of the PI handling activities of the entrusted party” (Art. 22); this resembles the audit clause in Art. 28 GDPR agreements.
Finally, if the data processing agreement with a third party becomes ineffective or invalid or is otherwise terminated, the third party must not store the personal information and must either return it to the data handler or delete it (Art. 21).
Public and Private Sectors Covered
Similarly to GDPR’s household exemption, the PIPL does not apply to processing by natural persons for their personal or family affairs (Art. 72). The PIPL further applies to processing activities in both the private and public sectors.
No private organization is exempt from the scope of the law; on the other hand, certain companies (those that “provide important Internet platform services,” have a large number of users, and handle complex types of personal information) are subject to reinforced obligations (see below).
In the public sector, all “state organs” (i.e. public authorities and agencies, at central, provincial or municipal levels, including the courts and lawmaking bodies) must abide by a specific set of obligations with regard to the handling of PI in the context of the performance of their statutory duties (Art. 34). These rules apply alongside the wide array of information management rules which apply to the Chinese administration.
The same obligations apply to organizations that handle PI on behalf of state agencies based on specific laws or regulations (Art. 37). This includes notifying individuals and obtaining their consent when handling PI (for instance, to share PI between administrations), unless notification would impede the performance of their statutory duties or specific statutory rules impose secrecy (Art. 18 & 35). State agencies must store the PI they process in China and can only transfer such data overseas “if it is really necessary to provide the PI overseas” and after undergoing a security risk assessment that “may require support and assistance from relevant departments” (Art. 36).
State agencies that fail to comply with the law will be subject to oversight from a superior authority and will have to make corrections in their processing activities (Art. 68). The individuals directly responsible or in charge of the agency’s decisions that lead to non-compliance face personal liability for their actions, including termination, suspension, and fines (see below).
3. Territorial Scope, with Extraterritorial Long Arm
The PIPL principally applies to organizations and individuals handling the PI of natural persons within the borders of China, covering any organization or person physically present in China.
Article 3 of the law extends the territorial scope of the law to processing activities by handlers established outside of China, similar to the GDPR, if one of the following circumstances is present:
Where the purpose is to provide products or services to natural persons inside the borders;
Where conducting analysis or assessment of activities of natural persons inside the borders;
Other circumstances provided in laws or administrative regulations.
This third circumstance has no direct equivalent in the GDPR and leaves public authorities a margin of discretion to further extend the law’s long-arm jurisdiction in cross-border scenarios.
The law requires handlers outside of China that process personal information of covered persons to establish a dedicated entity or appoint a representative within China to be responsible for matters related to their information processing (Art. 52). Such entities must provide the name and contact method of the representative to the relevant departments responsible for implementing the law.
4. Lawful Grounds and Personal Information Protection Principles
PI handlers must have a valid legal basis to handle PI from one of the following circumstances (Art. 13):
Obtaining the individuals’ consent;
Where necessary to conclude or perform a contract to which the individual is an interested party, or where necessary for human resources management in accordance with lawfully formulated labor rules and lawfully concluded collective contracts (e.g., employee data);
Where necessary to fulfill statutory duties and responsibilities or statutory obligations;
Where necessary to respond to sudden public health incidents or protect natural persons’ lives and health, or the security of their property, under emergency conditions;
Handling personal information within a reasonable scope to implement news reporting, public opinion supervision, and other such activities for the public interest;
Handling personal information publicly disclosed by an individual or other legally disclosed information within a reasonable scope, unless the individual expressly refuses or if there is a major influence on individual rights and interests.
Other circumstances provided in laws and administrative regulations.
As in the GDPR, the seven legal grounds to process PI in Art. 13 are provided on an equal basis, meaning that there is no preferred order in which they should be relied on. This provision is significant because it distinguishes the PIPL from the data protection provisions previously applicable to the collection and processing of PI in China, including in the Cybersecurity Law and Civil Code, which are mainly centered on consent. This evolution will be welcomed by practitioners and legal scholars alike, who in China as elsewhere criticized over-reliance on consent as an insufficient and artificial protection for individuals. The consent-centric framework was also criticized as being too rigid, with companies long advocating for additional legal bases, such as performance of a contract or legitimate interests, e.g. for anti-fraud purposes.
However, this list does not include the broad concept of the data handler’s “legitimate interests” as it has existed for more than twenty years in the EU data protection framework, and now in a significant number of data protection laws in Asia Pacific and other regions. It is nonetheless possible that further administrative regulations will add a similar ground for processing. The insertion in the final version of the PIPL of a specific provision on employee data, exempting such processing from the consent requirement (Art. 13(2)), may indicate the legislator’s openness to this development. This provision builds on precedents in local regulations, such as the recent Shenzhen data regulation, which expand exceptions to consent for employers processing employees’ data for certain purposes.
Focus on Consent
Despite the evolution seen in the addition of several other lawful grounds for handling data, consent is present throughout the law outside the legal bases provision. For example, the law prohibits handlers from disclosing the PI they are processing unless they obtain specific consent (Art. 25). When processing publicly available PI, handlers may only process it in a way that reasonably conforms to the purpose for which it was published; if it is processed for a different purpose, the individual must be notified and asked for consent. Consent also plays a role in further uses of facial recognition equipment installed in public venues, which is allowed as a matter of principle only for the purpose of ensuring public security (Art. 26).
The conditions for validity of consent are that it must be informed (given under a “precondition of knowledge” about the processing), and it must be given “in a voluntary and explicit statement of wishes” (Art. 14). Laws or administrative regulations may require “specific consent” or “written consent” for specific processing of PI (Art. 14).
Similar to the GDPR, individuals have the right to withdraw consent (Art. 15). Inspired by the “freely given” validity condition under the GDPR, the PIPL also provides that handlers may not refuse to provide products or services on the basis that an individual does not consent to the processing of PI or withdraws their consent, except in those situations where the PI is “necessary” for the provision of products or services (Art. 16). Handlers must provide a convenient way for individuals to withdraw consent, and the withdrawal will not affect any processing activity that took place before consent was revoked (Art. 15).
Stringent Rules for Children’s PI
Handlers that process PI of children younger than 14 must obtain consent from parents (Art. 31). This marks a departure from earlier drafts, which mandated obtaining consent only when the processor knew or should have known that the data subject was under 14; that formulation was notably introduced in the 2018 PI Security Specification. The stricter standard also reflects the fact that such information now constitutes sensitive information under the PIPL, so handlers must comply with additional requirements (see above). Lawmakers in China have recently concentrated on the online protection of minors: they passed a revised Law on Minors with more stringent restrictions for companies that offer online services to minors, and regulators have taken recent enforcement actions in this space.
Personal Information Protection Principles
The PIPL recognizes key data protection principles very similar to the Fair Information Practice Principles (FIPPs) and other data protection principles included in the GDPR:
There is a principle of “sincerity” or “good faith,” which is most likely akin to the principle of fairness. The law further recognizes principles of lawfulness and necessity (Art. 5).
There are rules on purpose limitation (Art. 6), including a requirement to process PI with a clear, reasonable and directly related purpose. There is also a minimization provision (Art. 6).
The law references openness and transparency as overarching goals of personal information processing (Art. 7).
The law also stipulates accuracy and accountability as guiding principles (Art. 8-9).
There is a provision that references storage limitation (Art. 19): retention periods shall be the shortest period necessary to realize the purpose of the personal information handling.
5. Automated Decision-Making (自动化决策) and Facial Recognition
The PIPL contains specific provisions governing the use of Automated Decision-Making (ADM). Under the law, ADM refers to “activities that use personal information to automatically analyze, assess, and decide, via computer programs, on individual behaviors and habits, interests and hobbies, or situations relating to finance, health, or credit status” (Art. 73(2)).
The law mandates specific processing obligations:
When conducting ADM with PI, handlers must guarantee the transparency, fairness, and reasonableness of the result (Art. 24). If an individual believes ADM creates a major influence on their rights and interests, they may demand an explanation and refuse decisions made solely through automated means.
Entities that use ADM to make targeted marketing offerings should simultaneously provide an option for individuals to receive information not based on personal characteristics or offer a convenient method of refusal (Art. 24).
No unreasonable differences in transaction price or treatment may be imposed on individuals through ADM (Art. 24).
Handlers must conduct a DPIA before offering such services through automated means (Art. 55).
Facial Recognition Rules for Public Areas
In public areas, image collection and personal identity recognition equipment may be installed only as necessary to safeguard public security and must observe relevant State regulations (Art. 27). Safeguarding public security is the only legally recognized purpose for such activities, and individuals must be notified of the information collection process. Information gathered in this way cannot be published or disclosed, except where individuals’ specific consent is obtained, or laws and regulations provide otherwise.
The provisions mirror a growing public awareness in China of the need to regulate the private use of facial recognition technology in public areas more strictly. For instance, in a famous case that received widespread attention both within and outside of China, a lawyer successfully sued a zoo for using the technology to monitor and admit guests. Although the plaintiff prevailed on a breach-of-contract theory, the ruling did not change the zoo’s policy, and he plans to appeal the case to the highest court.
In addition, several cities have passed regulations limiting or banning the use of these technologies including Tianjin, Nanjing, and Xuzhou.
6. Rights for Individuals Over Their PI (Access, Erasure; Specific Right to Explanation, Portability)
Under the law, individuals should receive explicit notice “before handling” occurs and be provided with relevant information including the identity and contact method of the personal information handler, any subsequent third party handlers, the purpose and methods of PI handling, the categories of handled PI, the retention period, and procedures for individuals to exercise their individual rights under the law (Art. 17). Notably, Art. 18 specifies that handlers do not need to notify individuals if a state secrecy law is in place or under “emergency” circumstances, which could include threats to public security, health or safety.
The law stipulates that personal information handlers must establish mechanisms to accept and process applications from individuals to exercise their rights (Art. 50). If a handler rejects a request, it must explain the reason for doing so. The law recognizes the following rights:
Right to know, decide, refuse, and limit the handling of their personal information by others, unless laws or regulations stipulate otherwise (Art. 44).
Right to access and copy their personal information in a timely manner, except when the laws and regulations require confidentiality (Art. 45).
Right to correct or complete inaccurate personal information in a timely manner (Art. 46).
Right to deletion where (i) the agreed retention period has expired or the handling purpose has been achieved; (ii) the personal information handler ceases the provision of services; (iii) the individual rescinds consent; or (iv) the information was handled in violation of laws, regulations, or agreements (Art. 47).
Right to request handlers explain their handling rules (Art. 48).
Right to data portability to a designated handler (Art. 45, para. 3). Specific conditions for porting data will be determined by state cybersecurity and information departments.
These rights extend beyond an individual’s death and can be exercised by close relatives of the decedent, unless otherwise arranged by the decedent during their lifetime (Art. 49).
7. Obligations of Data Handlers related to Accountability: “DPIAs”, “DPOs”, Data Breach Notification, Training Obligations; Large Scale Distinctions
Chapter V provides for a number of obligations of PI handlers. Art. 52 stipulates that handlers that handle information reaching quantities outlined by the competent authorities must appoint persons responsible for PI protection and publish the name and contact details of such persons. When the handler discovers a personal information leak, they must immediately adopt remedial measures and notify competent authorities (Art. 57). Where adopted measures can effectively avoid data breach harms, information handlers do not have to notify individuals.
Handlers must conduct a personal information protection influence assessment to determine whether the handling purposes and methods are lawful, the influence such processing has on individuals, and whether the adopted security measures are adequate to ensure compliance (Art. 55). The assessment should take into account the handling of sensitive personal information, automated decision making, subsequent processing done by third parties, cross-border transfers, and other processing activities that have a significant impact on personal rights and interests.
Data handlers must also adopt corresponding technical security measures, such as encryption and de-identification; determine operational limits for information handling; regularly conduct security education and training for employees; and regularly conduct compliance audits with specialized entities (Art. 54). Additionally, they must formulate and organize the implementation of incident response plans.
One of the new provisions of the law, introduced just before its adoption, targets large online platforms with specific obligations. Dedicated rules for large or very large online platforms are also the subject of draft legislative measures in the EU (in particular the Digital Services Act). These new provisions in the PIPL require data handlers that provide platform services to a “large” number (用户数量巨大) of users and have complex business types (类型复杂) to (i) establish an independent organization to supervise processing activities; (ii) follow the principles of openness, fairness, and justice; (iii) immediately cease their service offerings when in serious violation of the law; and (iv) regularly publish reports on social responsibility of PI handling (Art. 58). While the threshold under this article remains undefined, the most recent version of the law makes a clear distinction between large-scale Internet platforms and small-scale handlers.
8. Cross-Border Transfers and Data Localization
Transfers of PI outside the borders of China are regulated in Chapter III, with the stated objective of ensuring that the transfer of data outside of China must be protected to the same extent as under Chinese law. This chapter is emblematic of the diversity of the objectives pursued by the text as described earlier. In these provisions, the legislator seeks both to promote responsible data transfers that respect the rights and interests of Chinese citizens, on the model of other provisions relating to transfers in “traditional” data protection laws, and to defend China’s strategic interests.
All transfers must pass a necessity test (they must be “necessary for business or other needs”, undefined). In addition, handlers must provide specific further notice to individuals, regardless of what mechanism for transfers is used (see below), and following this notice handlers have to obtain the individual’s specific consent (Art. 39).
Transfers must further meet at least one of the following conditions (Art. 38):
Undergoing a security assessment organized by the state cybersecurity and informatization departments in accordance with Art. 40, which states that operators of Critical Information Infrastructure (CII) and entities that transfer a large volume of PI must locally store personal information collected in China and undergo a further security assessment if a transfer is necessary. This provision resembles article 37 of the Cybersecurity Law, which similarly imposes restrictions on CII operators.
Obtaining certification conducted by a specialized body according to provisions by the cybersecurity and information departments. This provision in Art. 38(2) mirrors the equivalent provision on “approved certification mechanisms” in EU GDPR (Art. 46(2)(f)). Other similar mechanisms exist in both the Cybersecurity Review Measures and the Multi-Layer Protection System (MLPS) certification scheme under the Cybersecurity Law.
Concluding a contract with a foreign receiving party, specifying both parties’ rights and obligations, and supervising their activities to ensure they comply with standards provided in the law. The relevant state cybersecurity and information departments will provide standard contractual clauses (SCCs) to handlers for reference when entering into cross-border transfer agreements (Art. 38).
Complying with other conditions provided in laws or administrative regulations or by the State cybersecurity and informatization department (catch-all provision).
Each of these provisions deliberately opens up space for international negotiations on the interoperability of China’s overseas PI transfer framework, in the spirit of international cooperation that Art. 12 infuses into the text: “the State promotes mutual recognition of PI protection rules [or norms], standards etc. with other countries, regions and international organizations.”
This emphasis on mutual recognition appears to leave room for China to pursue its own bilateral and multilateral data transfer facilitation mechanism with other trading partners, such as those along the Belt and Road Initiative (BRI). Mutual recognition may take the form of recognition of SCCs, certification mechanisms from other jurisdictions, or other international agreements with relevant digital trade or protection provisions.
Interestingly, there is no adequacy regime mentioned in the cross-border data transfers chapter. This choice was no doubt carefully considered by the drafters of the text and can be traced to the work of influential Chinese academics who have presented the regulatory models of data transfers of the EU on the one hand, and the US on the other hand, as “exclusionary blocks” of transborder data flows that would be based on geography (“adequate” jurisdictions for one, and APEC economies participating in the CBPR system for the other).
The counterpart to these cooperation-oriented provisions is a series of provisions aimed at defending China’s strategic interests.
Notably, if it is necessary to transfer PI outside of China for international judicial assistance or administrative law enforcement, information handlers must file an application with the relevant competent authority for approval (Art. 41). The law stipulates that international treaties or agreements that China has become a party to may govern cross-border transfers and supersede the provisions of the law. It is not clear if this provision only concerns international judicial assistance, or also includes general cross-border data transfers.
The PIPL provides that where a country or region adopts discriminatory prohibitions, limitations or other similar measures against China in the area of data protection, China “may adopt retaliatory measures against said country or region” (Art. 43). This provision mirrors other retaliatory measures in the Data Security and Export Control Law.
Regardless, parties should take extra care to comply with the law: foreign organizations or individuals that process PI in ways that infringe Chinese citizens’ rights and interests, or endanger China’s national security or public interest, may be placed on a publicly available entity list that restricts other handlers from transferring personal information to them (Art. 42).
9. Implementation and Enforcement
The law does not create an independent authority dedicated to data protection enforcement. The Cyberspace Administration of China (CAC) is the primary body responsible for data protection enforcement, but there are several other regulators that may also administer the law.
In addition, similar to the PI Specification, the Chinese government may delegate further responsibility to a Technical Committee (e.g., TC260) to develop standards to clarify the meaning of the law and provide more guidance on enforcement.
The PIPL stipulates penalties for violations and non-compliance, including the suspension or termination of application programs unlawfully handling data. Non-compliance not only involves unlawfully processing personal information but also includes failing to adopt proper necessary security protection measures in accordance with further regulations.
The law makes a distinction between two types of violations. In the first instance, the departments fulfilling data protection duties will order a correction, confiscate unlawful income, and issue a warning.
If the data handler refuses to correct the violation, it will receive a fine of not more than 1 million RMB ($150,000).
Persons who are directly responsible and in charge may also receive a fine between 10,000 and 100,000 RMB ($1500–$15,000) (Art. 66).
In serious violations, the fine may be increased up to 50 million RMB ($7,500,000) or 5% of annual revenue for the prior fiscal year (Art. 66). The law does not specify whether annual revenue will be calculated on the basis of global turnover.
Acts deemed illegal under PIPL will be recorded and made public in the social credit system (Art. 67).
In addition, the PIPL stipulates that engaging in personal information handling activities that harm national security or the public interest also constitute violations (Art. 10) but no specific penalty is provided for such harms. Violations of the law will be publicly recorded and could lead to removal from serving as a director, supervisor, or senior manager of the relevant enterprise for a period of time.
Importantly, the PIPL provides a mechanism for individuals to receive compensation from data handlers through judicial redress for the loss (damage) they suffered or the benefit the handler obtained “if the processing of personal information infringes upon the rights and interests of the individuals” (Art. 69). If it is difficult to determine actual damages or the benefits unlawfully obtained, a People’s Court may take into account the relevant circumstances and render an appropriate award. The second version of the draft PIPL reversed the burden of proof for the parties in a tort action against PI infringement, so that data handlers that cannot prove they are not at fault for the harm suffered will be liable. Additionally, when data handlers refuse an individual’s request to exercise data rights, that individual may file a lawsuit with a People’s Court (Art. 50).
Finally, when a violation of the law infringes on the rights and interests of many individuals, the People’s Procuratorates and the relevant enforcing agencies and departments may file a lawsuit with a People’s Court. One such example concerns the Civil Public Interest Litigation mechanism, which effectively operates as civil prosecution of large-scale violators of the law.