UPDATE: China’s Car Privacy and Security Regulation Takes Effect on October 1, 2021

The author thanks Hunter Dorwart for his contribution to this blog.

On August 20, 2021, the Cyberspace Administration of China (CAC) released an updated regulation on car privacy and data security that comes into force on October 1, 2021. The CAC initially published a draft on May 12, 2021. The regulation is titled “Several Provisions on the Management of Automobile Data Security (for Trial Implementation),” hereinafter the “enacted regulation.” A press release with answers to reporters’ questions was also published on the same date as the enacted regulation.

The purpose of the enacted regulation is to regulate automobile data processing activities, protect the legitimate rights and interests of individuals and organizations, safeguard national security and public interests, and promote the rational development and utilization of automobile data, in accordance with the “Network Security Law of the People’s Republic of China” and the “Data Security Law of the People’s Republic of China.”

The enacted regulation on car privacy and security is not even the biggest privacy news to come out of China in August 2021. On August 20, 2021, the National People’s Congress (NPC) of China adopted the first comprehensive Chinese data protection law, the Personal Information Protection Law (PIPL), less than a year after the first draft of the law was published. The PIPL will go into effect on November 1, 2021. For more about the PIPL, see my colleague’s recent blog post.

The enacted regulation should be read in conjunction with other laws, regulations, and standards in China’s emerging data protection regime. In theory, laws passed by the National People’s Congress (NPC), such as the Personal Information Protection Law (PIPL), take priority over administrative regulations, such as the one detailed in this post and other regulations passed within China’s larger regulatory bureaucracy such as the recent market regulations for the ride-hailing industry. The CAC is technically not a government agency but rather a super-ministerial body directly under the State Council. It drafts regulations with the input and agreement of other agencies but operates largely independently of them.

This post is an update to my May 18 post, in which I summarized the draft regulation (“Several Provisions on the Management of Automobile Data Security”). It compares the draft regulation with the enacted regulation, highlights notable changes between the two, and concludes with a summary of the main differences.

Updated scope of covered entities: “Automobile data processors”

Automobile data processors are organizations that carry out automobile data processing activities, including automobile manufacturers, parts and software suppliers, dealers, maintenance organizations, and ride-hailing and -sharing companies (出行服务企业).

In contrast, the draft regulation applies to “operators”, which are defined as “automobile design, manufacturing, and service enterprises or institutions, including automobile manufacturers, component and software providers, dealers, maintenance organizations, online car-hailing companies, insurance companies, etc.”

Covered data: Distinction among “personal information,” “important data,” and “sensitive personal information,” plus a new data type “automobile data”

The enacted regulation adds a fourth type of data: “automobile data.” Automobile data covers the personal information and important data involved in the process of automobile design, production, sale, use, operation, and maintenance.

Automobile data processing includes the collection, storage, use, processing, transmission, provision, and disclosure of automobile data.

Personal information refers to information, recorded electronically or by other means, that relates to identified or identifiable vehicle owners, drivers, passengers, and people outside the vehicle; it does not include anonymized information. This mirrors the definition of personal information in China’s Personal Information Protection Law but narrows the scope to information specific to the use of vehicles.

In contrast, the definition of “personal information” in the draft regulation includes “the personal information of car owners, drivers, passengers, pedestrians, etc., as well as various information that can infer personal identity and describe personal behavior.”

Sensitive personal information refers to personal information that, once leaked or illegally used, may cause discrimination against car owners, drivers, passengers, and people outside the car, or serious harm to personal and property safety, including vehicle location, audio, video, images, and biometric data.

The draft regulation’s definition of “sensitive personal information” (found in Article 8) includes “data that can be used to determine illegal driving.” The enacted regulation does not include this and instead refers to “personal information that, once leaked or illegally used, may cause discrimination against car owners, drivers, passengers, and people outside the car, or serious harm to personal and property safety.”

Important data refers to data that may endanger national security, public interests, or the legitimate rights and interests of individuals or organizations once it has been tampered with, destroyed, leaked, or illegally obtained or used, including:

(1) Geographical information, personnel flow, vehicle flow, and other data in important sensitive areas such as military management zones, national defense science and industry units, and party and government agencies at or above the county level;

(2) Data reflecting economic operation conditions such as vehicle flow and logistics;

(3) Operating data of the car charging network;

(4) Video and image data outside the car including face information, license plate information, etc.;

(5) Personal information involving more than 100,000 personal information subjects;

(6) Other data that may endanger national security, public interests, or the legitimate rights and interests of individuals and organizations as determined by the State Cyberspace Administration and the State Council’s development and reform, industry and information technology, public security, transportation, and other relevant departments.

The definitions of “important data” in the draft and enacted regulation are similar, and both regulations contain specific provisions for automobile data processors processing and sharing this type of data (see below).

Obligations based on the Fair Information Practice Principles (or the Personal Information Protection Principles)

Article 4 requires that automobile data processors process automobile data in a lawful and proper manner and for specific and clear purposes. Automobile data processing must be directly related to the design, production, sale, use, operation, and maintenance of the vehicle. This Article is similar to language in the draft regulation’s Article 4; however, the enacted regulation’s language is broader.

Article 5 in both the draft regulation and the enacted regulation is about security and data protection. Automobile data processors must implement network security grade protection, strengthen automobile data protection, and perform data security obligations in accordance with the law.

Article 6 in both the draft regulation and the enacted regulation lists several privacy best practices that automobile data processors are encouraged to follow when processing automobile data (note that this Article applies to “automobile data”). The enacted regulation lists four practices, while the draft listed five; the principle of data retention has been moved from Article 6 to Article 7.

The principles in Article 6 are now:

  1. Process automobile data inside the vehicle unless it is necessary to send it outside the vehicle. 
  2. Non-collection by default. Unless the driver chooses otherwise, the default is to not collect automobile data.
  3. Apply an appropriate accuracy range: the coverage and resolution of cameras, radars, and other sensors should match the accuracy of data that the processor’s functions and services actually require (“principle of accuracy range application”). 
  4. Desensitization treatment: apply anonymization or de-identification whenever possible. 

Article 6(3) is notable because it appears to introduce a technical standard, which might be considered outside the scope of a privacy (or data protection) regulation. However, the press release containing answers to reporters’ questions notes that “[d]uring the formulation of the ‘Regulations,’ both safety and development were emphasized…,” and “driving safety” is mentioned throughout the enacted regulation. More information about what exactly this means for manufacturers may be provided through technical or industry standards.

Article 7 in both the draft regulation and the enacted regulation applies to “personal information” (not the broader “automobile data”) and requires automobile data processors to notify individuals through manuals, on-board display panels, voice, and other vehicle-related applications. 

The draft regulation lists four things the individual must be made aware of, while the enacted regulation lists seven. As noted above, retention has been moved to Article 7. The complete list of information that must be communicated to individuals is:

(1) The types of personal information processed, including vehicle location, driving habits, audio, video, images, and biometric features, etc.;

(2) The specific circumstances under which various types of personal information are collected and the ways and means to stop the collection;

(3) Purposes, uses, and methods of processing various types of personal information;

(4) Personal information storage location and retention period, or rules for determining the storage location and retention period;

(5) Ways and means of consulting and copying their personal information, deleting the information collected inside of the vehicle, and requesting to delete the personal information that has been provided outside the vehicle;

(6) The name and contact information of the contact person for exercising user rights;

(7) Other matters that should be notified as required by laws and administrative regulations.

While it is not clear from the text, Article 7 appears to use “individuals” more narrowly than the definition of “personal information,” which includes people outside of the vehicle. In Article 7, “individuals” does not appear to include people outside of the vehicle. This may reflect the practical challenge of effectively communicating all of the above information to pedestrians, whose interactions with the vehicle may be fleeting. The provisions in Article 7 also appear to focus on the types of data collected inside the vehicle.

Articles 8 and 9 have been switched between the draft and the enacted regulation.

Article 8 of the enacted regulation (draft regulation Article 9) is about consent to process personal information. When processing personal information, automobile data processors must obtain consent or comply with other requirements as stipulated by laws and administrative regulations. 

Notably, the new Article 8 mentions driving safety. The relevant sentence states (paraphrased and translated):

Where, due to the need to ensure driving safety, automobile data processors cannot obtain individual consent to collect personal information from outside the vehicle and provide that information outside the vehicle, they should anonymize the data, including by deleting images or videos that can identify natural persons or by applying partial contour processing to facial information in the footage (对画面中的人脸信息等进行局部轮廓化处理等), which appears to mean reducing identifiable facial features to rough outlines of the person.
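For illustration only, the sketch below shows one way an exterior-camera pipeline might desensitize faces before footage is provided outside the vehicle, here by detecting and blurring faces with OpenCV. The regulation mentions deleting identifiable footage or contour-processing faces but does not prescribe a specific algorithm; the file names and parameters here are hypothetical.

```python
# Illustrative only: one possible way to desensitize faces in exterior camera
# frames before footage is provided outside the vehicle. The regulation does
# not prescribe this (or any) specific technique.
import cv2

# Pretrained frontal-face detector shipped with OpenCV.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def blur_faces(frame):
    """Return a copy of the frame with detected faces heavily blurred."""
    output = frame.copy()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        region = output[y:y + h, x:x + w]
        # Strong Gaussian blur so individual facial features are no longer recognizable.
        output[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return output

# Example usage with a hypothetical exterior-camera image file.
frame = cv2.imread("exterior_frame.jpg")
if frame is not None:
    cv2.imwrite("exterior_frame_desensitized.jpg", blur_faces(frame))
```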

The draft regulation’s consent provision (Article 9) did not reference vehicle safety and instead recognized that it might be difficult in practice to obtain consent. 

Article 9 of the enacted regulation (draft regulation Article 8) lists requirements for processing “sensitive personal information” and notes that automobile data processors must also meet requirements under other applicable laws, administrative regulations, and mandatory national standards.

Again, one notable difference between the two regulations is that, in this Article on processing “sensitive personal information,” the draft regulation mentions “driving safety” once, while the enacted regulation mentions it three times. This further illustrates the enacted regulation’s greater focus on balancing vehicle and driving safety with privacy and security.

The five requirements for processing “sensitive personal information” in Article 9 are:

(1) Having the purpose of directly serving individuals, including enhancing driving safety, intelligent driving, navigation, etc.;

(2) Notifying the necessity and impact on individuals through obvious means such as user manuals, on-board display panels, voice, and car use-related applications;

(3) Individual consent should be obtained, and the individual can independently set the time limit for consent;

(4) Under the premise of ensuring the safety of driving, prompt the collection status in an appropriate manner to provide convenience for individuals to terminate the collection;

(5) If an individual requests deletion, the automobile data processor shall delete it within ten working days.

There are a few differences between the draft and enacted regulations that are worth noting here. 

  1. The requirement in 9(1) is almost identical, except that the enacted regulation does not explicitly include the purpose of “entertainment” and, instead of “assisting driving,” uses “intelligent driving.” Because the list ends with “etc.,” it is presumably not closed, and “entertainment” may still be read in. 
  2. The requirement for notice in 9(2) is enhanced in the enacted regulation. The draft regulation (in 8(3)) requires that the individual be informed that sensitive personal information is collected. The enacted regulation requires that individuals be notified of the necessity and the impact on them. 
  3. The draft regulation requires that individuals be able to terminate the collection of sensitive personal data at any time (8(4)). Being able to stop the collection of this data at any time could have raised safety concerns if, for example, the driver terminated the collection while the car was in operation without understanding how that data was being used to operate the car. The enacted regulation updates this language in a way that may address the concern: individual consent is required, and the individual can set the time limit for consent (9(3)). 
  4. Related to 9(3), the enacted regulation states in 9(4) that “Under the premise of ensuring the safety of driving, prompt the collection status in an appropriate manner to provide convenience for individuals to terminate the collection.”
  5. The enacted regulation does not include draft Article 8(5), which would have allowed vehicle owners to conveniently view and make structured queries about the collected sensitive personal information.
  6. The draft regulation requires that sensitive personal data be deleted within two weeks upon request by the “driver.” The enacted regulation requires automobile data processors to delete that data within ten working days if requested by an “individual.”

The enacted regulation also includes a purpose and necessity requirement for the collection of a particular type of sensitive personal information: biometric data (such as fingerprints, voiceprints, human faces, and heart rhythms). This appears to replace the draft regulation’s Article 10, which focuses on biometric data. Note that the press release states that “[r]egarding personal biometric information, it is clear that the car data processor has the purpose of enhancing driving safety and is necessary to collect it,” underscoring the sensitivity of biometric data and the high bar required to process this type of data.

Article 10 of the enacted regulation adds a new requirement for automobile data processors who process “important data.” In this situation, automobile data processors must conduct risk assessments in accordance with regulations and submit risk assessment reports to the provincial, autonomous region, and municipal network information departments and relevant departments.

The risk assessment report must cover the type, quantity, scope, storage location, and retention period of the important data processed; the data processing activities involved and whether the data is provided to third parties; the data security risks faced; and the countermeasures adopted.
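As a purely illustrative aid, the sketch below organizes the contents required for such a risk assessment report into a structured record. Every field name and sample value is hypothetical; the regulation does not prescribe any particular format.

```python
# Illustrative only: a structured skeleton mirroring the fields Article 10
# requires a risk assessment report to cover. Field names and sample values
# are hypothetical; the regulation does not prescribe a format.
import json

risk_assessment_report = {
    "important_data_processed": {
        "types": ["vehicle flow in sensitive areas", "charging network operating data"],
        "quantity": "approx. 250,000 records",          # hypothetical figure
        "scope": "provincial road network",
        "storage_location": "data center in Shanghai",  # hypothetical location
        "retention_period_days": 365,
        "purpose_of_use": "traffic analytics",
    },
    "processing_activities": {
        "description": "aggregation and statistical analysis",
        "provided_to_third_parties": False,
    },
    "security_risks": ["unauthorized access", "re-identification of aggregated data"],
    "countermeasures": ["encryption at rest", "role-based access control", "audit logging"],
}

print(json.dumps(risk_assessment_report, indent=2, ensure_ascii=False))
```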

This appears to replace Article 11 of the draft regulation, which requires operators to report to the provincial network information department and relevant departments similar information about important data, but does not use the term “risk assessment.”

“Important Data”

Both the draft and enacted regulations contain several Articles pertaining to “important data”. (Articles 11-17 in the draft regulation and Articles 10-14 in the enacted regulation).

As noted above, automobile data processors are required to conduct a risk assessment when processing important data (Article 10).

Article 11 requires important data to be stored in China unless it is necessary to provide it overseas for business purposes. In that case, the data must pass a security assessment for outbound transfer (an “exit safety assessment”) organized by the State Cyberspace Administration together with the relevant departments of the State Council.

Automobile data processors should not exceed the purpose, scope, method, type, and scale of important data specified in the exit safety assessment when this data is shared overseas (Article 12). The national cybersecurity and informatization department, in conjunction with relevant departments of the State Council, will verify the matters specified in the exit safety assessment by means of random inspections.

Article 13 requires automobile data processors who process important data to report the following automobile data security management information to the provincial, autonomous region, and municipal network information department and relevant departments before December 15 of each year:

(1) The name and contact information of the person in charge of automobile data security management and the contact person for processing user rights;

(2) The type, scale, purpose, and necessity of processing automobile data;

(3) Safety protection and management measures for automobile data, including storage location, period, etc.;

(4) The provision of automobile data to domestic third parties;

(5) Automobile data security incidents and how they were handled;

(6) User complaints relating to automobile data and how they were handled;

(7) Other automobile data security management conditions specified by the State Cyberspace Administration in conjunction with the State Council’s industry and information technology, public security, transportation, and other relevant departments.

In addition to the above requirements in Article 13, if automobile data processors share important data overseas there are additional reporting requirements found in Article 14. Articles 13 and 14 replace Articles 17 and 18 in the draft regulation.

Article 15 states that anyone participating in the exit safety assessment must not disclose the trade secrets or other confidential information learned during the assessment or use any information for purposes other than the assessment.

Article 16 appears to be an affirmation that China supports intelligent and connected vehicle operations and will cooperate with automobile data processors to strengthen and secure the network.

Article 17 requires auto data processors to establish appropriate complaints and reporting portals to handle user complaints.

Article 18 replaces Article 20 of the draft regulation but contains similar language regarding violations and penalties.

In summary, processing “important data” and sharing it overseas may trigger up to five separate assessments, reports, or inspections:

  1. Risk assessment: All automobile data processors who process important data should complete a risk assessment. Risk assessments are submitted to the provincial, autonomous region, and municipal network information departments and relevant departments (Article 10).
  2. Exit security assessment: If an automobile data processor finds it is necessary to share important data outside of China for business purposes, the automobile data processor must pass a security assessment organized by the national network information department in conjunction with the relevant departments of the State Council (Articles 11 and 12).
  3. Random inspection: The State Cyberspace Administration and relevant departments of the State Council will conduct random inspections to verify the information automobile data processors record in their exit security assessment (Article 12).
  4. Annual report: All auto data processors that process important data must file an annual automobile data security report (Article 13).
  5. Annual supplementary report: If an automobile data processor finds it is necessary to share important data outside of China for business purposes, the automobile data processor must supplement the annual report referenced in Article 13 with additional information (Article 14).

Summary of the Main Differences between the Draft Regulation and the Enacted Regulation

  1. The enacted regulation has a new defined term: “automobile data.” This term appears to be a shorthand to refer to both “personal information” and “important data”.  “Sensitive personal information” is a subset of “personal information.”
  2. The definition of “personal information” has been updated.
  3. “Sensitive personal information” is explicitly defined and somewhat clarified. The draft regulation’s definition of “sensitive personal information” includes “data that can be used to determine illegal driving”. The enacted regulation does not include this and instead refers to “personal information that once leaked or illegally used, may cause discrimination against car owners, drivers, passengers, and people outside the car, or serious harm to personal and property safety”.
  4. There is a new risk assessment requirement for processing “important data.”
  5. The draft regulation applies to “operators,” and the enacted regulation applies to “automobile data processors”. The definitions of both of these terms are different.
  6. The principle of data retention has been moved from Article 6 (privacy best practices) to Article 7 (requirements to notify individuals).
  7. More emphasis is placed on “driving safety” in the enacted regulation. For example, see Articles 8 and 9 and the press release. This further illustrates a greater focus on the importance of balancing vehicle and driving safety with privacy and security. This balance or, at times, tension will likely appear in both vehicle and automated vehicle regulations globally.
  8. The enacted regulation has an updated deletion request timeline for sensitive personal data.
  9. The enacted regulation has additional requirements and considerations for automobile data processors processing or sharing “important data” overseas.

Conclusion

Some challenges and considerations raised by the enacted regulation are: 1) the short lead time before the October 1, 2021 coming-into-force date; 2) the introduction of what appear to be technical standards without further detail (e.g., Article 6(3)); and 3) that it is not always clear who exactly “individuals” refers to (e.g., Article 7). Many of the requirements and best practices throughout the regulation will likely require software, hardware, and design changes, and the tight deadline could prove challenging for automakers, whose design and manufacturing cycle for a vehicle can run two to three years.

The enacted regulation highlights the complexity of the mobility ecosystem in two ways. The first is the complexity of the data flows, evidenced by the regulation defining three types of data commonly processed by automobile data processors and introducing a new umbrella term, “automobile data.” The second is the complexity of the parties involved, evidenced by the broad definition of “personal information,” which covers the vehicle owner, driver, passengers, and people outside of the vehicle. Similarly, “automobile data processors” is defined fairly broadly and includes vehicle manufacturers, hardware and software suppliers, dealers, repair shops, and ride-hailing companies. 

Also notable are the references to and emphasis on driving safety. As vehicles become more connected and automated, safety standards will increasingly influence data processing and thus privacy and data protection regulations, which will in turn impact vehicle design, operations, and safety. This circle of influence underscores the importance of privacy and data protection experts working closely with product designers and computer scientists. Privacy and data protection are slowly but surely moving from the risk and compliance office and into the product and engineering offices. As we travel along the road of car privacy and security regulations, this trend is sure to speed up. 

Image by Erdenebayar Bayansan from Pixabay 

Read More:

For an overview of China’s recently adopted Personal Information Protection Law, see China’s New Comprehensive Data Protection Law: Context, Stated Objectives, Key Provisions

China’s New Comprehensive Data Protection Law: Context, Stated Objectives, Key Provisions

On August 20, 2021, the National People’s Congress (NPC) of China adopted the first comprehensive Chinese data protection law, the Personal Information Protection Law (PIPL), less than a year after the first draft of the law was published. The NPC thus concluded a legislative process that saw two additional markups of the law since October 2020. The PIPL will go into effect on November 1, 2021, but many companies within China are already coordinating with relevant enforcement agencies to comply. The adoption of the PIPL comes amid enhanced scrutiny of the tech sector by the Chinese government and within a year of the entry into force of the new Civil Code, which includes specific provisions for the protection of personal information.

The PIPL represents one pillar of China’s emerging data protection architecture, which includes a myriad of other laws, industry-specific regulations, and standards. For instance, the recently enacted Data Security Law (DSL) sets forth a comprehensive list of requirements regarding the security and transferability of other types of data. It also establishes a “marketplace for data” to enable data exchange and digitalization. Additionally, the PIPL explicitly references China’s Constitution to provide a firmer legal basis for the implementation of its data protection goals (Art. 1). As such, the PIPL should not be viewed in isolation but rather examined in relation to these other regulatory tools, which serve complementary, albeit different, purposes.

The PIPL will mainly serve as China’s comprehensive data protection law, following in this respect the European approach which clearly distinguishes the protection of privacy from the protection of individuals with regard to the processing of their personal information (“data protection”). Its officially declared aims are thus: 

  1. to protect the rights and interests of individuals (为了保护个人信息权益),
  2. to regulate personal information processing activities (规范个人信息处理活动),
  3. to safeguard the lawful and “orderly flow” of data (保障个人信息依法有序自由流动),
  4. to facilitate reasonable use of personal information (促进个人信息合理利用) (Art. 1).

Throughout the legislative process, experts and privacy professionals contributed to the legislator’s work, drawing among other things on their experience implementing the EU’s General Data Protection Regulation (GDPR), which served as a reference here, as it did in the drafting of earlier data protection instruments such as the Personal Information Specification. It should be noted that it is not unusual for Chinese lawmakers to draw inspiration from texts and codes in the European continental law tradition, China itself being a civil law jurisdiction.

The PIPL however serves several other objectives, which distinguishes it from the majority of data protection laws adopted to date around the world. Like its previous preparatory versions, the law has a distinct ‘national security’ flavor, particularly around its provisions on localization and cross-border transfers. 

The law also incorporates provisions that affirm China’s intention to defend its digital sovereignty: overseas entities which infringe on the rights of Chinese citizens or jeopardize the national security or public interests of China will be placed on a blacklist and any transfers of personal information of Chinese citizens to these entities will be restricted or even barred. China will also reciprocate against countries or regions that take “discriminatory, prohibitive or restrictive measures against China in respect of the protection of personal information” (Art. 43).

Last but not least, the PIPL clearly states China’s ambition to take a full part in international data protection discussions and thus assert influence commensurate with the size of its economy and its growing technological capabilities. In particular, the PIPL states China’s aim to actively contribute to the setting of global data protection standards “with other countries, regions, and international organizations” (Art. 12). Related provisions of the PIPL echo China’s stated ambition to influence international negotiations that relate directly or indirectly to international data transfers. The relevant provisions should therefore be read in the broader perspective of the Belt & Road Initiative (BRI) and the provisions relating to data transfers included in the Regional Comprehensive Economic Partnership (RCEP), conceived as a “regional backup” of negotiations on WTO e-trade rules, the so-called JSI negotiations.

Overview of the PIPL

At a broader level, like most modern data protection laws modelled after the GDPR, the PIPL sets forth a range of obligations, administrative guidelines, and enforcement mechanisms with respect to the processing of personal information. For instance, it applies to a very broadly defined category of “personal information” (PI), which includes the “identifiable” element from the GDPR; it includes lawful grounds for processing modelled on the GDPR, though with “legitimate interests” notably missing; and it applies to the “handling” of PI, which includes “collection,” meaning that a lawful ground is needed even before the data is touched.

Additionally, the PIPL has rules for “handlers,” “joint handling,” and “entrusted parties” that handle PI on behalf of handlers (the rough equivalents of controllers, joint controllership, and processors), including agreements to be put in place similar to the Art. 26 and Art. 28 agreements under the GDPR. It likewise applies in the public sector as well as the private sector, and it has data localization requirements for PI processed by state organs, critical infrastructure operators, and other handlers reaching a specific volume of processed PI.

The law regulates personal information transfers outside of China by imposing obligations on handlers before transferring data abroad such as complying with a security assessment by relevant authorities. It also mandates risk assessments (similar to a Data Protection Impact Assessment) for specific processing including automated decision-making and handling that could have “a major influence on individuals.”  Data handlers must also appoint Data Protection Officers (DPOs) in specific situations, depending on the volume of PI processed, and conduct regular compliance training.

Individuals are granted an extensive number of “rights in personal information handling activities”. The PIPL provides for individual rights very similar to GDPR’s “rights of the data subject”, such as erasure and access, and it specifically includes a right to obtain explanation and a right to data portability, the latter being introduced late in the third version of the draft law. 

Finally, the PIPL has a complex system of enforcement, including fines (that can go up to 5% of a company’s turnover) and administrative action (including orders to stop processing, or confiscation of unlawfully obtained profit), individual rights to obtain compensation, and civil public interest litigation cases through a public prosecutor.

The PIPL is divided into eight substantive chapters. Below we summarize the key aspects of the law and provide preliminary analysis.

1. Covered Data: Personal Information, Sensitive Information

The law applies to the “handling” of “personal information” in both the private and public sectors. Unlike the GDPR and its Article 4, the PIPL contains no general provision defining the key terms of the law. Rather, notable definitions are scattered throughout the text and are sometimes included directly in a more specific provision. Most of the definitions in the law are similar to, or use wording identical to, those of the GDPR, with notable variations.

Broad Definition of Personal Information (个人信息) 

Personal information (PI) refers to “all kinds of electronic or otherwise recorded information related to an identified or identifiable natural person” (Art. 4). This definition largely mirrors the one set forth under the Cybersecurity Law and Chinese Civil Code, which define personal information as “the various types of electronic or otherwise recorded information that can be used separately or in combination with other information to identify the natural person.” Relatedly, it resembles the broad definition of “personal data” in the GDPR as “any information relating to an identified or identifiable natural person.” 

Open List of Sensitive Information (敏感个人信息)

The law further specifies that sensitive information means “personal information that once leaked or illegally used may cause discrimination against individuals or grave harm to personal or property security, including information on race, ethnicity, religious beliefs, individual biometric features, medical health, financial accounts, individual location tracking, etc.” (Art. 28). Information handlers may only process sensitive personal information for specific purposes and when sufficiently necessary (Art. 28). Handlers shall further obtain specific consent if they rely on individual consent for processing (Art. 29).

Notably, the definition of sensitive information diverges from the GDPR’s “special categories” of personal data, which is a closed list of specific types of personal data (see Article 9). The PIPL has an open list of sensitive data, centering its definition around the notion of harm and the data’s potential discriminatory impact on individuals. Unlike the GDPR, the PIPL contains no specific provision regarding the processing of PI related to criminal convictions and offences. In contrast, financial information and location data are included in the scope of sensitive PI, with the effect that their handling requires the individual’s specific consent. The extension of the scope of sensitive information to cover financial information has also been seen in other jurisdictions, such as India.

Finally, the PIPL treats biometric data as sensitive PI. This qualification resonates with the specific provisions in the law on facial recognition (see below). 

De-identification and Anonymization are Defined Separately

“De-identification” and “anonymization” are defined in the very last substantive provision of the PIPL (Art. 73), with anonymized PI specifically excluded from the material scope of the law (Art. 4).
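As a rough illustration of why the two terms are defined separately, the toy sketch below contrasts de-identification (reversible by whoever holds additional, separately stored information) with anonymization (intended to be irreversible). The record fields and key are hypothetical, and real-world anonymization requires far more than dropping a name.

```python
# Illustrative only: a toy contrast between de-identification and anonymization
# as the two terms are defined separately in Art. 73 PIPL. The record fields
# and secret key are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"held-separately-by-the-handler"  # hypothetical key

record = {"name": "Zhang San", "city": "Shenzhen", "trip_count": 12}

def de_identify(rec):
    """Replace the direct identifier with a keyed pseudonym.
    Re-identification remains possible for whoever holds the key and a lookup table."""
    pseudonym = hmac.new(SECRET_KEY, rec["name"].encode(), hashlib.sha256).hexdigest()[:16]
    return {**rec, "name": pseudonym}

def anonymize(rec):
    """Drop the identifier entirely and coarsen the remaining fields so the
    individual can no longer be singled out."""
    return {"city": rec["city"], "trip_count_band": "10-20"}

print(de_identify(record))
print(anonymize(record))
```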

Personal Information Handling (个人信息的处理) Defined Broadly to Cover the Entire Lifecycle of PI 

PI handling includes “the collection, storage, use, processing, transmission, provision, publishing, and other such activities” of personal information (Art. 4). This resembles the definition of processing under the GDPR, and it means that the rules in the law apply to the collection of PI as well as to its use. Since the law includes lawful grounds for handling PI (see below), such a ground must be in place before a handler collects the data.

2. Covered Actors, Both in the Public and Private Sectors

Information Handler or “Controller” (个人信息处理者); “Entrusted Parties”/Processors

Conventionally, the law has rules on controllers, joint controllers, and processors. Parties or individuals become personal information handlers when they “independently determine the purposes and means for handling of PI” (Art. 73). PI handlers appear to function under the Chinese law in a manner similar to data controllers under the GDPR. Note that neither the law nor any other legislation in China specifically uses the term “controller” (控制者).

The law also provides rules on joint controllership, “where two or more PI handlers jointly decide on a PI handling purpose and handling method”. Joint handlers have to “agree” on the rights and obligations of each, and the agreement should not affect the possibility for an individual to exercise their rights against any of them; they are also jointly liable for breaches (Art. 21). 

Handlers can entrust handling of PI to third parties under very similar conditions to the controller-processor relationship in the GDPR. They have to conclude an agreement, which has to refer to the purpose of the entrusted handling, the handling method, categories of PI, the rights and duties of both sides etc., including ways to “conduct supervision of the PI handling activities of the entrusted party” (Art. 22); this resembles the audit clause in Art. 28 GDPR agreements. 

Finally, if the data processing agreement with a third party becomes ineffective or invalid or is otherwise terminated, the third party must not retain the personal information and must either return it to the data handler or delete it (Art. 21).

Public and Private Sectors Covered 

Similar to the GDPR’s household exemption, the PIPL does not apply to processing by natural persons for their personal or family affairs (Art. 72). The PIPL otherwise applies to processing activities in both the private and public sectors.

No private organization is exempt from the scope of the law; on the other hand, certain companies (personal information handlers that provide important Internet platform services, have a very large number of users, and operate complex types of business) are subject to reinforced obligations (see below).

In the public sector, all “state organs” (i.e. public authorities and agencies, at central, provincial or municipal levels, including the courts and lawmaking bodies) must abide by a specific set of obligations with regard to the handling of PI in the context of the performance of their statutory duties (Art. 34). These rules apply alongside the wide array of information management rules which apply to the Chinese administration. 

The same obligations apply to organizations which handle PI on behalf of state agencies based on specific laws or regulations (Art. 37). This includes notifying individuals and obtaining their consent when handling PI (for instance, to share PI between administrations), unless notification would impede the performance of their statutory obligations or specific statutory rules impose secrecy (Art. 18 & 35). State agencies must store the PI they process in China and can only transfer such data overseas “if it is really necessary to provide the PI overseas” and after undergoing a security risk assessment that “may require support and assistance from relevant departments” (Art. 36).

State agencies that fail to comply with the law will be subject to oversight from a superior authority and will have to make corrections in their processing activities (Art. 68). The individuals directly responsible or in charge of the agency’s decisions that lead to non-compliance face personal liability for their actions, including termination, suspension, and fines (see below). 

3. Territorial Scope, with Extraterritorial Long Arm 

The PIPL principally applies to organizations and individuals handling the PI of natural persons within the territory of China. This covers any organization or person physically within the borders of China.

Article 3 of the law extends its territorial scope to processing activities by handlers established outside of China, similarly to the GDPR, if one of the following circumstances is present: (1) the handling is for the purpose of providing products or services to natural persons inside China; (2) the handling analyzes or assesses the conduct of natural persons inside China; or (3) other circumstances provided by laws or administrative regulations.

This third circumstance has no direct equivalent in the GDPR and leaves a margin of discretion to the public authorities to further extend the long-arm jurisdiction of the law in cross-border scenarios.

The law requires handlers outside of China that process personal information of covered persons to establish a dedicated entity or appoint a representative within China to be responsible for matters related to their information processing (Art. 52). Such entities must provide the name and contact method of the representative to the relevant departments responsible for implementing the law. 

4. Lawful Grounds and Personal Information Protection Principles

PI handlers must have a valid legal basis for handling PI, drawn from one of the following circumstances (Art. 13): (1) the individual’s consent; (2) necessity for the conclusion or performance of a contract to which the individual is a party, or for human resources management under lawfully adopted labor rules and collective contracts; (3) necessity to fulfill statutory duties or obligations; (4) necessity to respond to public health emergencies or to protect the life, health, or property of natural persons in an emergency; (5) handling PI within a reasonable scope for news reporting, public opinion supervision, and other activities in the public interest; (6) handling, within a reasonable scope, PI that has already been disclosed by the individual or otherwise lawfully disclosed; and (7) other circumstances provided by laws and administrative regulations.

As in the GDPR, the seven legal grounds for processing PI in Art. 13 are provided on an equal footing, meaning there is no preferred order in which they should be relied on. This provision is significant because it distinguishes the PIPL from the data protection provisions previously applicable to the collection and processing of PI in China, including in the Cybersecurity Law and Civil Code, which are mainly centered on consent. This evolution will be welcomed by practitioners and legal scholars alike, who, in China as elsewhere, have criticized over-reliance on consent as an insufficient and artificial protection of the individual. The consent-centric framework was also criticized as too rigid, with companies long advocating for additional legal bases, such as performance of a contract or legitimate interests, e.g., for anti-fraud purposes.

However, this list does not include the broad concept of the data handler’s “legitimate interests” as it has existed for more than twenty years in the EU data protection framework, and now in a significant number of data protection laws in Asia Pacific and other regions. It is nonetheless possible that further administrative regulations will add a similar ground for processing. The insertion in the final version of the PIPL of a specific provision on the processing of employee data, which exempts such processing from the consent requirement (Art. 13(2)), may be indicative of this direction in the legislator’s thinking. This provision builds on precedents found in local regulations, such as the recent Shenzhen data regulation, which expand exceptions to consent so that employers may process employees’ data for certain purposes.

Focus on Consent

Despite the evolution seen in the addition of several other lawful grounds for handling data, consent is present throughout the law outside the legal bases provision. For example, the law prohibits handlers from disclosing the PI they are processing unless they obtain specific consent (Art. 25). When handling publicly available PI, handlers may only process it in a way that reasonably conforms to the purpose for which it was published; if it is processed for a different purpose, the individual must be notified and asked for consent. Consent also plays a role in further uses of facial recognition installed in public venues, which is allowed as a matter of principle only for the purpose of ensuring public security (Art. 26).

The conditions for validity of consent are that it must be informed (given under a “precondition of knowledge” about the processing), and it must be given “in a voluntary and explicit statement of wishes” (Art. 14).  Laws or administrative regulations may require “specific consent” or “written consent” for specific processing of PI (Art. 14).

Similar to the GDPR, individuals have the right to withdraw consent (Art. 15). Inspired by the “freely given” validity condition under the GDPR, the PIPL also provides that handlers may not refuse to provide products or services on the basis that an individual does not consent to the processing of PI or withdraws their consent, except where the PI is “necessary” for the provision of those products or services (Art. 16). Handlers must provide a convenient way for individuals to withdraw consent, and the withdrawal does not affect any processing activity that took place before consent was revoked (Art. 15).

Stringent Rules for Children’s PI

Handlers that process PI of children younger than 14 must obtain consent from parents (Art. 31). This marks a departure from earlier drafts, which mandated obtaining consent only when the handler knew or should have known that the data subject was 14 or under. This provision was notably introduced in the 2018 PI Security Specification. The inclusion of a stricter standard also reflects the fact that such information now constitutes sensitive information under the PIPL, so handlers must comply with additional requirements (see above). Lawmakers in China have recently concentrated on the online protection of minors: they have passed a revised Law on Minors with more stringent restrictions for companies that offer online services to minors and have taken recent enforcement actions in this space.

Personal Information Protection Principles 

The PIPL recognizes key data protection principles very similar to the Fair Information Practice Principles (FIPPs) and the data protection principles included in the GDPR: lawfulness, legitimacy, necessity, and good faith (Art. 5); purpose limitation and data minimization (Art. 6); openness and transparency (Art. 7); accuracy (Art. 8); and accountability and security of personal information (Art. 9).

5. Automated Decision-Making (自动化决策) and Facial Recognition

The PIPL contains specific provisions governing the use of Automated Decision-Making (ADM). Under the law, ADM refers to “activities that use personal information to automatically analyze, assess, and decide via computer programs, individual behaviors and habits, interests and hobbies, or situations relating to finance, health, or credit status” (Art. 73(2)).

The law mandates specific processing obligations: handlers must ensure the transparency of the decision-making and the fairness and impartiality of the results, and may not impose unreasonably differentiated treatment on individuals in trading conditions such as price; when pushing information or commercial marketing to individuals through ADM, they must also offer an option not targeted at the individual’s personal characteristics or a convenient way to refuse; and where ADM produces decisions with a major impact on an individual’s rights and interests, the individual has the right to request an explanation and to refuse decisions made solely through automated means (Art. 24).

Facial Recognition Rules for Public Areas

Image collection and personal identity recognition equipment may be installed in public areas only as needed to safeguard public security, and its use must observe relevant State regulations (Art. 26). Safeguarding public security is the only legally recognized purpose for such activities, and individuals must be notified of the information collection. Information gathered in this way cannot be published or disclosed, except where individuals’ specific consent is obtained or laws and regulations provide otherwise.

The provisions mirror a growing public awareness in China of the need to regulate the private use of facial recognition technology in public areas more strictly. For instance, in a famous case that received widespread attention both within and outside of China, a lawyer successfully sued a zoo for using the technology to monitor and admit guests. Although the plaintiff won on a breach-of-contract theory, he was unable to change the zoo’s policy and plans to appeal the case to the highest court.

In addition, several cities have passed regulations limiting or banning the use of these technologies including Tianjin, Nanjing, and Xuzhou. 

6. Rights for Individuals Over Their PI (Access, Erasure; Specific Right to Explanation, Portability)

Under the law, individuals should receive explicit notice “before handling” occurs and be provided with relevant information including the identity and contact method of the personal information handler, any subsequent third party handlers, the purpose and methods of PI handling, the categories of handled PI, the retention period, and procedures for individuals to exercise their individual rights under the law (Art. 17).  Notably, Art. 18 specifies that handlers do not need to notify individuals if a state secrecy law is in place or under “emergency” circumstances, which could include threats to public security, health or safety. 

The law requires personal information handlers to establish mechanisms to accept and process applications from individuals seeking to exercise their rights (Art. 50). If a handler rejects a request, it must explain the reason for doing so. The law recognizes the following rights: the right to know and to decide on the handling of one’s PI, including the right to limit or refuse handling; the right to access and copy one’s PI; the right to data portability; the right to correct and complete inaccurate PI; the right to deletion; and the right to request an explanation of a handler’s PI handling rules.

These rights extend beyond an individual’s death and can be exercised by close relatives of the decedent, unless otherwise arranged by the decedent during their lifetime (Art. 49).

7. Obligations of Data Handlers related to Accountability: “DPIAs”, “DPOs”, Data Breach Notification, Training Obligations; Large Scale Distinctions 

Chapter V provides for a number of obligations of PI handlers. Art. 52 stipulates that handlers that handle information reaching quantities outlined by the competent authorities must appoint persons responsible for PI protection and publish the name and contact details of such persons. When the handler discovers a personal information leak, they must immediately adopt remedial measures and notify competent authorities (Art. 57). Where adopted measures can effectively avoid data breach harms, information handlers do not have to notify individuals. 

Handlers must conduct a personal information protection impact assessment to determine whether the handling purposes and methods are lawful, what impact the processing has on individuals, and whether the adopted security measures are adequate to ensure compliance (Art. 55). The assessment should take into account the handling of sensitive personal information, automated decision-making, subsequent processing done by third parties, cross-border transfers, and other processing activities that have a significant impact on personal rights and interests.

Data handlers must also adopt corresponding technical security measures such as encryption and de-identification; determine operational limits for information handling; regularly conduct security education and training for employees; and regularly conduct compliance audits with specialized entities (Art. 54). Additionally, they must formulate and organize the implementation of incident response plans.

One of the new provisions of the law, introduced just before its adoption, targets large online platforms with specific obligations. Dedicated rules for large or very large online platforms are also the object of draft legislative measures in the EU (in particular the Digital Services Act). These new provisions in the PIPL require data handlers that provide platform services to a “large” number (用户数量巨大) of users and have complex business types (类型复杂) to (i) establish an independent organization to supervise processing activities; (ii) follow the principles of openness, fairness and justice; (iii) immediately cease their service offerings when in serious violation of the law; and (iv) regularly publish reports on social responsibility of PI handling (Art. 58). While the threshold amount under this article remains undefined, the most recent version of the law makes a clear distinction between large-scale Internet platforms and small-scale handlers. 

8. Cross-Border Transfers and Data Localization

Transfers of PI outside the borders of China are regulated in Chapter III, with the stated objective of ensuring that the transfer of data outside of China must be protected to the same extent as under Chinese law. This chapter is emblematic of the diversity of the objectives pursued by the text as described earlier. In these provisions, the legislator seeks both to promote responsible data transfers that respect the rights and interests of Chinese citizens, on the model of other provisions relating to transfers in “traditional” data protection laws, and to defend China’s strategic interests. 

All transfers must pass a necessity test (they must be “necessary for business or other needs,” which the law leaves undefined). In addition, handlers must provide specific further notice to individuals, regardless of which transfer mechanism is used (see below), and following this notice handlers must obtain the individual’s specific consent (Art. 39).

Transfers must further meet at least one of the following conditions (Art. 38): 

Each of these provisions deliberately opens up space for international negotiations on the interoperability of China’s framework for overseas transfers of PI, in the spirit of international cooperation that Art. 12 is intended to infuse throughout the text: “the State promotes mutual recognition of PI protection rules [or norms], standards, etc. with other countries, regions and international organizations.”

This emphasis on mutual recognition appears to leave room for China to pursue its own bilateral and multilateral data transfer facilitation mechanism with other trading partners, such as those along the Belt and Road Initiative (BRI).  Mutual recognition may take the form of recognition of SCCs, certification mechanisms from other jurisdictions, or other international agreements with relevant digital trade or protection provisions. 

Interestingly, there is no adequacy regime mentioned in the cross-border data transfers chapter. This choice was no doubt carefully considered by the drafters of the text and can be traced to the work of influential Chinese academics who have presented the regulatory models of data transfers of the EU on the one hand, and the US on the other hand, as “exclusionary blocks” of transborder data flows that would be based on geography (“adequate” jurisdictions for one, and APEC economies participating in the CBPR system for the other). 

The counterpart to these provisions, which can anchor cooperation with other international actors, is a series of provisions aimed at defending China’s strategic interests.

Notably, if it is necessary to transfer PI outside of China for international judicial assistance or administrative law enforcement, information handlers must file an application with the relevant competent authority for approval (Art. 41). The law stipulates that international treaties or agreements that China has become a party to may govern cross-border transfers and supersede the provisions of the law. It is not clear if this provision only concerns international judicial assistance, or also includes general cross-border data transfers. 

The PIPL provides that where a country or region adopts discriminatory prohibitions, limitations or other similar measures against China in the area of data protection, China “may adopt retaliatory measures against said country or region” (Art. 43). This provision mirrors other retaliatory measures in the Data Security and Export Control Law. 

Regardless, parties should take extra care to comply with the law: foreign organizations or individuals whose PI handling infringes Chinese citizens’ rights and interests or endangers China’s national security or public interest may be placed on a publicly available entity list, and other handlers will be restricted from transferring personal information to them (Art. 42).

9. Implementation and Enforcement 

The law does not create an independent authority dedicated to data protection enforcement. The Cyberspace Administration of China (CAC) is the primary body responsible for data protection enforcement, but there are several other regulators that may also administer the law. 

In addition, similar to the PI Specification, the Chinese government may delegate further responsibility to a Technical Committee (e.g., TC260) to develop standards to clarify the meaning of the law and provide more guidance on enforcement.  

The PIPL stipulates penalties for violations and non-compliance, including the suspension or termination of application programs unlawfully handling data. Non-compliance not only involves unlawfully processing personal information but also includes failing to adopt proper necessary security protection measures in accordance with further regulations.

The law distinguishes between two types of violations. In ordinary cases, the departments fulfilling data protection duties will order a correction, issue a warning, and confiscate unlawful income; a handler that refuses to correct the violation may also be fined. For grave violations, authorities at the provincial level or above may order corrections, confiscate unlawful income, impose fines of up to 50 million RMB or 5% of the previous year’s turnover, order the suspension of business activities, or revoke business licenses (Art. 66).

Acts deemed illegal under PIPL will be recorded and made public in the social credit system (Art. 67).

In addition, the PIPL stipulates that engaging in personal information handling activities that harm national security or the public interest also constitutes a violation (Art. 10), although no specific penalty is provided for such harms. Violations of the law will be publicly recorded and could lead to a prohibition on serving as a director, supervisor, or senior manager of the relevant enterprise for a period of time.

Importantly, the PIPL provides a mechanism for individuals to receive compensation from data handlers through judicial redress for the loss (damage) they suffered or the benefit the handler obtained “if the processing of personal information infringes upon the rights and interests of the individuals” (Art. 69). If it is difficult to determine the actual damages or the benefits unlawfully obtained, a People’s Court may take the relevant circumstances into account and render an appropriate award. The second draft of the PIPL reversed the burden of proof in tort actions for PI infringement, so that data handlers that cannot prove they are not at fault for the harm suffered will be liable. Additionally, when data handlers refuse an individual’s request to exercise their data rights, that individual may file a lawsuit with a People’s Court (Art. 50).

Finally, when a violation of the law infringes on the rights and interests of many individuals, the People’s Procuratorates and the relevant enforcement agencies and departments may file a lawsuit with a People’s Court. One such example is the Civil Public Interest Litigation mechanism, which effectively operates as civil prosecution of large-scale violators of the law.

Blog Summary: Ethical Concerns and Challenges in Research using Secondary Data

Digital data is a strategic asset for business. It is also an asset for researchers seeking to answer socially beneficial questions using company-held data. Research using secondary data introduces new challenges and ethical concerns for research administrators and research ethics committees, such as IRBs.

FPF Senior Researcher, AI & Ethics, Dr. Sara Jordan, analyzes some of these concerns in the post Research Using Secondary Data: New Challenges and Novel Resources for Ethical Governance, published on Ampersand: The PRIM&R Blog.

In the informative analysis, Dr. Jordan examines five key points:

Public Responsibility in Medicine and Research (PRIM&R)’s blog Ampersand is a discussion space for thoughtful conversations on research ethics and oversight. Learn more here.

Future of Privacy Forum Launches its Asia-Pacific Office Led by Dr. Clarisse Girot

The Future of Privacy Forum’s Asia-Pacific office launches today in Singapore under the leadership of Dr. Clarisse Girot. As Director of the FPF Asia-Pacific office, she is responsible for developing and implementing FPF’s strategy in the world’s largest and most populous region.

We have asked Dr. Girot to share her thoughts for FPF’s blog on Asia’s role in the global conversation on data protection law and policy, its key characteristics, and the balancing role FPF Asia will play in the region.

1) What are the key characteristics of data protection law and policy developments in Asia? 

Dr. Girot: The first characteristic of these developments is their intensity, and the legal uncertainty that results from it: legislative and regulatory activity in data privacy in Asia is happening at an unprecedented pace. The adoption of the EU GDPR has been an essential driver behind these developments, but domestic factors also play a role: major data breaches have occurred in most countries, and a series of privacy controversies have confronted citizens, companies, and governments in this region just as elsewhere. At the same time, the massive development of the digital economy and the emergence of unicorns and local tech champions have sharpened the ambitions of many countries to enhance their status as tech powers, data hubs, and so on.

Major jurisdictions have thus undertaken a general upgrade of pre-existing laws, including some of the oldest data protection frameworks in the world (New Zealand, South Korea, and Japan, for instance), but also more recent, “second generation” laws like Singapore’s PDPA. China’s National People’s Congress is expected to adopt the Personal Information Protection Law (PIPL) before the end of the year, while the process of adopting India’s Data Protection Bill could finally be concluded after several years of waiting. These texts will be groundbreaking, if only because China and India are the two most populous countries in the world.

The second characteristic, amplified by the first, is the fragmentation of legal systems, with variations that make any regional compliance effort particularly difficult. These variations also create gaps in protection that are prejudicial to individuals, as well as gaps in the capacities of regulators to cooperate. 

The third is that APAC jurisdictions have very diverse cultural norms and economic priorities. Some laws are more explicitly grounded in human rights considerations, some are focused on “digital sovereignty” and national security, and many others are more broadly concerned with advancing consumer protection and building confidence in the digital economy. In practice, these three objectives have recently become more and more intertwined. The Covid-19 crisis has added a further layer of uncertainty to a region that is at once extraordinarily dynamic and contrasted, and therefore very complex to grasp.

2) How do you see Asia’s role in the global debate on data protection law and policy? 

Dr. Girot: It is always difficult to generalize about “Asian developments” in the way we speak of European or US developments, because of the great contrasts that exist from one country to another in what is, moreover, the largest region of the world. What is certain, on the other hand, is that Asia in general, and certain countries in particular, will take an increasingly important place in the global privacy debate.

This evolution will happen partly because of a simple numerical factor, but also because the scope of privacy discussions has, for historical reasons, been defined essentially in “the West”, and many countries in the region now want to contextualize privacy developments and adapt pre-existing models to make them their own. The majority of jurisdictions in the region will find it difficult to subscribe to an approach focused on privacy as a fundamental human right, but an approach exclusively focused on trade and consumer protection will not suit them either. Cultural differences, and even more importantly differences in economic development, will necessarily lead the countries of the region to weigh in on global developments.

3) How can FPF Asia help key stakeholders in the region with finding a balance between the utility of tech and data, on one hand, and the need to protect rights and interests of communities and individuals, on the other hand?

Dr. Girot: The creation of FPF Asia is incredibly timely. In a region crossed by different currents, and in a high-voltage world, it can take on the discreet but indispensable role of a transformer and “universal adapter.” The needs and expectations are immense, for there is today no platform for cooperation in the region that is at once informal, neutral, and endowed with both the expert background and the capacities necessary to build an inclusive regional dialogue on privacy and data protection.

My four years of experience in a comparable position at the Asian Business Law Institute (ABLI) have taught me that cooperation can take many different forms, depending on the current priorities of regulators and countries. These may include information sessions and informal exchanges with regulators, convening different stakeholders to discuss cutting-edge data protection issues, providing expertise on emerging technology in a way that is useful for policymakers, facilitating peer-to-peer exchanges for businesses active in the region, providing independent analyses and reports on key policy topics… Asia’s skies are the limit!

If you have any questions about engaging with The Future of Privacy Forum on Global Privacy and Digital Policymaking contact Dr. Gabriela Zanfir-Fortuna, Director for Global Privacy, at [email protected].

Stay updated: Sign up for FPF Asia-Pacific email alerts.

FPF Launches Asia-Pacific Region Office, Global Data Protection Expert Clarisse Girot Leads Team

The Future of Privacy Forum (FPF) has appointed Clarisse Girot, PhD, LLM, an expert on Asian and European privacy legislation, to lead its new FPF Asia-Pacific office based in Singapore as Director. This new office expands FPF’s international reach in Asia and complements FPF’s offices in the U.S., Europe, and Israel, as well as partnerships around the globe.
 
Dr. Clarisse Girot is a privacy professional with over twenty years of experience in the privacy and data protection fields. Since 2017, Clarisse has been leading the Asian Business Law Institute’s (ABLI) Data Privacy Project, focusing on the regulations on cross-border data transfers in 14 Asian jurisdictions. Prior to her time at ABLI, Clarisse served as the Counsellor to the President of the French Data Protection Authority (CNIL) and Chair of the Article 29 Working Party. She previously served as head of CNIL’s Department of European and International Affairs, where she sat on the Article 29 Working Party, the group of EU Data Protection Authorities, and was involved in major international cases in data protection and privacy.
 
“Clarisse is joining FPF at an important time for data protection in the Asia-Pacific region. The two most populous countries in the world, India and China, are introducing general privacy laws, and established data protection jurisdictions, like Singapore, Japan, South Korea, and New Zealand, have recently updated their laws,” said FPF CEO Jules Polonetsky. “Her extensive knowledge of privacy law will provide vital insights for those interested in compliance with regional privacy frameworks and their evolution over time.”
 
FPF Asia-Pacific will focus on several priorities through the end of the year, including hosting an event at this year’s Singapore Data Protection Week. The office will provide expertise on digital data flows and discuss emerging data protection issues in a way that is useful for regulators, policymakers, and legal professionals. Rajah & Tann Singapore LLP is supporting the work of the FPF Asia-Pacific office.
 
“The FPF global team will greatly benefit from the addition of Clarisse. She will advise FPF staff, advisory board members, and the public on the most significant privacy developments in the Asia-Pacific region, including data protection bills and cross-border data flows,” said Gabriela Zanfir-Fortuna, Director for Global Privacy at FPF. “Her past experience in both Asia and Europe gives her a unique ability to confront the most complex issues dealing with cross-border data protection.”
 
As over 140 countries have now enacted a privacy or data protection law, FPF continues to expand its international presence to help data protection experts grapple with the challenges of ensuring responsible uses of data. Following the appointment of Malavika Raghavan as Senior Fellow for India in 2020, the launch of the FPF Asia-Pacific office further expands FPF’s international reach.
 
Dr. Gabriela Zanfir-Fortuna leads FPF’s international efforts and works on global privacy developments and European data protection law and policy. The FPF Europe office is led by Dr. Rob van Eijk, who prior to joining FPF worked at the Dutch Data Protection Authority as Senior Supervision Officer and Technologist for nearly ten years. FPF has created thriving partnerships with leading privacy research organizations in the European Union, such as Dublin City University and the Brussels Privacy Hub of the Vrije Universiteit Brussel (VUB). FPF continues to serve as a leading voice in Europe on issues of international data flows, the ethics of AI, and emerging privacy issues. FPF Europe recently published a report comparing the regulatory strategy for 2021-2022 of 15 Data Protection Authorities to provide insights into the future of enforcement and regulatory action in the EU.
 
Outside of Europe, FPF has launched a variety of projects to advance tech policy leadership and scholarship in regions around the world, including Israel and Latin America. The work of the Israel Tech Policy Institute (ITPI), led by Managing Director Limor Shmerling Magazanik, includes publishing a report on AI Ethics in Government Services and organizing an OECD workshop with the Israeli Ministry of Health on access to health data for research.
 
In Latin America, FPF has partnered with the leading research association Data Privacy Brasil and provided in-depth analysis of Brazil’s LGPD privacy legislation and of various data privacy cases decided by the Brazilian Supreme Court. FPF recently organized a panel during the CPDP LatAm Conference which explored the state of Latin American data protection laws alongside experts from Uber, the University of Brasilia, and the Interamerican Institute of Human Rights.
 

Read Dr. Girot’s Q&A on the FPF blog. Stay updated: Sign up for FPF Asia-Pacific email alerts.
 

FPF and Leading Health & Equity Organizations Issue Principles for Privacy & Equity in Digital Contact Tracing Technologies

With support from the Robert Wood Johnson Foundation, FPF engaged leaders within the privacy and equity communities to develop actionable guiding principles and a framework to help bolster the responsible implementation of digital contact tracing technologies (DCTT). Today, seven privacy, civil rights, and health equity organizations signed on to these guiding principles for organizations implementing DCTT.

“We learned early in our Privacy and Pandemics initiative that unresolved ethical, legal, social, and equity issues may challenge the responsible implementation of digital contact tracing technologies,” said Jules Polonetsky, CEO of the Future of Privacy Forum. “So we engaged leaders within the civil rights, health equity, and privacy communities to create a set of actionable principles to help guide organizations implementing digital contact tracing that respects individual rights.”

Contact tracing has long been used to monitor the spread of infectious diseases. In light of COVID-19, governments and companies began deploying digital exposure notification using Bluetooth and geolocation data on mobile devices to boost contact tracing efforts and quickly identify individuals who may have been exposed to the virus. However, as DCTT begins to play an important role in public health, it is important to take the necessary steps to ensure equitable access to DCTT and to understand the societal risks and tradeoffs that might accompany its implementation today and in the future. Governance efforts that seek to better understand these risks will be better positioned to bolster public trust in DCTT.

“LGBT Tech is proud to have participated in the development of the Principles and Framework alongside FPF and other organizations. We are heartened to see that the focus of these principles is on historically underserved and under-resourced communities everywhere, like the LGBTQ+ community. We believe the Principles and Framework will help ensure that the needs and vulnerabilities of these populations are at the forefront during today’s pandemic and future pandemics.”

Carlos Gutierrez, Deputy Director, and General Counsel, LGBT Tech

“If we establish practices that protect individual privacy and equity, digital contact tracing technologies could play a pivotal role in tracking infectious diseases,” said Dr. Rachele Hendricks-Sturrup, Research Director at the Duke-Margolis Center for Health Policy. “These principles allow organizations implementing digital contact tracing to take ethical and responsible approaches to how their technology collects, tracks, and shares personal information.”

FPF, together with Dialogue on Diversity, the National Alliance Against Disparities in Patient Health (NADPH), BrightHive, and LGBT Tech, developed the principles, which advise organizations implementing DCTT to commit to the following actions:

  1. Be Transparent About How Data Is Used and Shared.
  2. Apply Strong De-Identification Techniques and Solutions.
  3. Empower Users Through Tiered Opt-in/Opt-out Features and Data Minimization.
  4. Acknowledge and Address Privacy, Security, and Nondiscrimination Protection Gaps.
  5. Create Equitable Access to DCTT.
  6. Acknowledge and Address Implicit Bias Within and Across Public and Private Settings.
  7. Democratize Data for Public Good While Employing Appropriate Privacy Safeguards.
  8. Adopt Privacy-By-Design Standards That Make DCTT Broadly Accessible.

Additional supporters of these principles include the Center for Democracy and Technology and Human Rights First.

To learn more and sign on to the DCTT Principles visit fpf.org/DCTT.

Support for this program was provided by the Robert Wood Johnson Foundation. The views expressed here do not necessarily reflect the views of the Foundation.

The Spectrum of AI: Companion to the FPF AI Infographic

In December 2020, FPF published the Spectrum of Artificial Intelligence – An Infographic Tool, designed to visually display the variety and complexity of Artificial Intelligence (AI) systems, the fields this science is based on, and a small sample of the use cases these technologies support for consumers. Today, we are releasing the white paper The Spectrum of Artificial Intelligence – Companion to the FPF AI Infographic to expand on the information included in this educational resource and to describe in more detail how the graphic can be used as an aid in education or in developing legislation or other regulatory guidance around AI-based systems. We identify additional, specific use cases for various AI technologies and explain how differing algorithmic architectures and data demands present varying risks and benefits. We discuss the spectrum of algorithmic technology and demonstrate how design factors, data use, and model training processes should be considered in specific regulatory approaches.

Artificial intelligence is a term with a long history. Meant to denote those systems which accomplish tasks otherwise understood to require human intelligence, AI is directly connected to the development of computer science but is based on a myriad of academic fields and disciplines, including philosophy, social science, physics, mathematics, logic, statistics, and ethics. AI, as it is designed and used today, is made possible by the recent advent of unprecedentedly large datasets, increased computational power, advances in data science, machine learning, and statistical modeling. AI models include programming and system design based on a number of sub-categories, such as robotics, expert systems, scheduling and planning systems, natural language processing, neural networks, computer sensing, and machine learning. In many cases of consumer facing AI, multiple forms of AI are used together to accomplish the overall performance goal specified for the system. In addition to considerations of algorithmic design, data flows, and programming languages, AI systems are most robust for use in equitable and stable consumer uses when human designers also consider limitations of machine hardware, cybersecurity, and user-interface design.

This paper outlines the spectrum of AI technology, from rules-based and symbolic AI to advanced, developing forms of neural networks, and seeks to put them in the context of other sciences and disciplines, as well as emphasize the importance of security, user interface, and other design factors. Additionally, we seek to make this understandable through providing specific use cases for the various types of AI and by showing how the different architecture and data demands present specific risks and benefits.

Across the spectrum, AI is a combination of various types of reasoning. Rules-based or symbolic AI is the form of algorithmic design wherein humans draft a complete program of logical rules for a computer to follow. Newer AI advances, particularly machine learning systems based on neural networks, power computers that carry out the programmer’s initial design but then adapt based on what the system can glean from patterns in the data. These systems can score the accuracy of their results and feed those outcomes back into the model in order to improve the performance of subsequent iterations of the program.
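To make the contrast concrete, below is a minimal, illustrative sketch (our own, not taken from the white paper; the keyword list, toy features, and labels are invented). The first function follows fixed, human-written rules; the second adjusts its weights whenever its own prediction disagrees with a known outcome, so later iterations behave differently from earlier ones.

```python
# Illustrative sketch: a rules-based check versus a simple learning system.

def rule_based_flag(message: str) -> bool:
    # Symbolic / rules-based AI: a human author writes the complete logic in advance.
    suspicious_words = {"winner", "prize", "urgent"}
    return any(word in message.lower() for word in suspicious_words)

def train_perceptron(examples, labels, epochs=20, lr=0.1):
    # A minimal learning system: weights start at zero and are nudged every time the
    # model's prediction disagrees with the known outcome, so the "program" improves
    # across iterations instead of following a fixed rule set.
    weights = [0.0] * len(examples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = y - prediction                                   # score the result...
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error                                       # ...and feed it back into the model
    return weights, bias

# Toy features: [contains a suspicious word, number of links]; label 1 = spam.
X = [[1, 3], [0, 0], [1, 1], [0, 2]]
y = [1, 0, 1, 0]
print(rule_based_flag("URGENT: you are a winner"))  # True, by a fixed rule
print(train_perceptron(X, y))                       # weights learned from the data
```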

AI systems operate across a broad spectrum of scale. Processes using these technologies can be designed to seek solutions to macro level problems like environmental challenges: undetected earthquakes, pollution control, and other natural disaster responses. They are also incorporated into personal level systems for greater access to commercial, educational, economic, and professional opportunities. If regulation is to be effective, it should focus on both technical details and the underlying values and rights that must be protected from adverse uses of AI, to ensure that AI is ultimately used to promote human dignity and welfare.

A .pdf version of the printed paper is available here.

Now, On the Internet, EVERYONE Knows You’re a Dog

An Introduction to Digital Identity

By Noah Katz and Brenda Leong

What is Digital Identity?

As you go through your day, everyone wants to know something about you. The bouncer at a bar needs to know you are over 21, the park ranger wants to see your fishing license, your doctor has to review your medical history, a lender needs to know your credit score, the police must check your driver’s license, and airport security has to confirm your ticket and passport. In the past, you would have a separate piece of paper or plastic for each of these exchanges, but the Information Revolution has caused a dramatic shift to digital and virtual channels. Shopping, banking, healthcare, gaming, even therapy and exercise, are all activities that can now be performed partially or entirely using online platforms and services. However, systems using digital transactions struggle to establish trust around personal identification because personal login credentials vary for every account and passwords are forgettable and frequently insecure. Largely because of this “trust gap,” the equivalent of personal identity credentials like a passport and driver’s license have notably lagged other services in moving to an online format. That is starting to change. 

Potentially, all these tasks can be accomplished with a single “digital identity,” a system of electronically stored attributes and/or credentials that uniquely identify an individual person. Digital identity systems vary in complexity. At its most basic, a digital ID would simply recreate a physical ID in a digital format. For instance, digital driver’s licenses are coming to augment, and possibly eventually replace, the physical driver’s license or state-issued ID we carry now. Available via an app that provides the platform for verification and security, these digital versions can be used in the same way as a physical ID, to provide for authentication of our individual attributes like a unique ID number (Social Security number), birthdate (Driver’s License), citizenship (passport) or other government-issued, legal aspects of personhood. 

At the other end of the spectrum, a fully integrated digital identity system would provide a platform for a complete wallet and verification process, usable both online and in the physical world. That is, it would authenticate you as an individual, as above, but also tie to all the accounts and access rights you hold, including the credentials represented by those attributes. Such a system would enable you to share or verify your school transcripts or awarded degrees, provide your health records, or access your online accounts and stored data. This sort of credentialing program can also act as an electronic signature, timestamp, or seal for financial and legal transactions. 

There are a variety of technologies being explored to provide this type of platform, although there is no clear consensus or standard at this time. There are those who advocate for “self-sovereign identity,” wherein a blockchain-based platform allows individuals to directly control the release of their own information to designated recipients. There are also mobile-based systems that use a combination of cloud and local storage via a mobile device in conjunction with an app to offer a single identity verification platform. 

These proposed identification systems are being designed for use in commercial circumstances as well as for accessing government systems and benefits. In countries with national identification cards (most countries other than the U.S. and the UK), the national ID may come to be issued digitally even sooner. Estonia has the most advanced example of such a system: everyone there who has been issued a government ID can provide digital signatures and authentication via the mobile platform, and can use it as a driver’s license, a health service identifier, a public transport pass, and a travel document, as well as to vote or to bank.

The concept of named spaces and unique identifiers is older than the internet itself. Started in 1841, and fully computerized by the 1970s, Dun & Bradstreet operates a database containing detailed credit information on over 285 million businesses, making it one of the key providers of commercial data and analytics for over a century. Its unique 9-digit identifier is the foundation of its entire system.

The UK’s Companies House, the state registrar for public companies, traces back to the Joint Stock Companies Act of 1844 and the formation of shareholder enterprises. As with D&B, companies are recorded on a public register, but with the added requirement to include the personal data that the Registrar maintains on company personnel; for example, directors must record their name, address, occupation, nationality, and date of birth. The advent of mandatory passports in the twentieth century, along with pseudonymous identification of individuals by governments, such as with Social Security numbers, furthered this trend of personal records based on unique individual identities (and not without controversy).

With the advent of the internet, online identities exploded into every facet of our financial, commercial, entertainment, educational, and professional lives, and today many people have tens, if not hundreds, of personal accounts and affiliations, each with a unique, numbered, or assigned digital record. Maintaining awareness of all our accounts has become almost impossible, much less exercising adequate and accurate oversight of the security of each vendor, site, or set of login credentials. The possibility of transitioning these accounts to be interoperable with a single, secure digital ID is now becoming more feasible due to advances in mobile technology, faster and less expensive biometric systems, and the availability of cloud services and fast processing capabilities.

How Digital Identity Works

In the past, a new patient at the doctor’s office had to provide at least three separately sourced documents: a driver’s license, a health insurance card, and a medical history. Even now, many offices take a physical license or insurance card and make a digital copy for their files. A digital wallet would allow a new patient to identify themselves, provide proof of insurance, and share their medical history all at once, via their smartphone or another access option.

Importantly, by digitally sending a one-time identity token directly to the vendor or health provider, these systems can be designed to provide the authentication or verification of a status or credential (e.g., an awarded degree), without physically handing over a smartphone and without providing the underlying data (the full transcript). By granularly releasing identity data as necessary for authorization, an ID holder will not have to include or provide more information than is needed to complete the transaction. That bouncer at the bar simply must know you are “over 21,” not your actual birthdate, much less your name and address.
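To illustrate that flow, here is a hypothetical sketch (not a description of any deployed system; the names, record, and claim are invented): the issuer computes only the predicate the verifier needs, attaches it to a random, single-use token, and never releases the underlying record.

```python
# Hypothetical sketch of a one-time, minimal-disclosure identity token.
import secrets
from datetime import date

class Issuer:
    """Holds the full identity record; releases only the requested predicate."""
    def __init__(self):
        self._records = {"alice": {"name": "Alice", "birthdate": date(1990, 5, 1)}}
        self._pending_tokens = {}  # token -> the single claim it discloses

    def issue_token(self, user_id: str, claim: str) -> str:
        record = self._records[user_id]
        if claim != "over_21":
            raise ValueError("unsupported claim")
        today = date.today()
        age = today.year - record["birthdate"].year - (
            (today.month, today.day) < (record["birthdate"].month, record["birthdate"].day)
        )
        token = secrets.token_urlsafe(16)                      # unguessable, single-use handle
        self._pending_tokens[token] = {"over_21": age >= 21}   # only the boolean, never the birthdate
        return token

    def redeem(self, token: str) -> dict:
        # One-time use: the token is consumed when the verifier redeems it.
        return self._pending_tokens.pop(token)

issuer = Issuer()
token = issuer.issue_token("alice", "over_21")   # the holder presents this to the bouncer
print(issuer.redeem(token))                      # {'over_21': True}; underlying data stays with the issuer
```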

An effective digital ID must be able to perform at least four main tasks:

To authenticate an individual, the system must ensure that a person is who they claim to be, protecting sufficiently against both false negatives (not allowing access to the legitimate account holder) and false positives (wrongly allowing access to unauthorized individuals). Security best practices require that authentication be accomplished via a multi-factor system, requiring two of the three options: something you know (a password, PIN code, or security question), something you have (a smart card, specific mobile device, or USB token), or something you are (a biometric).

[NOTE: a biometric is a unique, measurable physical characteristic which can be used to recognize or identify a specific individual. Facial images, fingerprints, and iris scans are all examples of biometrics. For authentication purposes, such as in the digital identity systems under discussion, biometrics are matched on a 1:1 or 1:few basis against an enrolled template. The template, specific to the system provider and not interoperable with other systems, may be stored locally on the device or in cloud storage. However, since operational or circumstantial considerations may preclude the use of biometrics in all cases, systems intended for mass access must offer alternatives as well. The details of biometric systems and their particular risks and benefits are beyond the scope of this article, but while not all digital identity systems are based on biometrics, most will likely include some form of biometric in their authentication processing.]
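Setting biometrics aside, a common pairing of the first two factors is a password check plus a time-based one-time password (TOTP) generated by an enrolled device. The sketch below is illustrative only (the secret and password are placeholder values) and follows the standard TOTP calculation from RFC 6238; production systems would also allow a small time-step window and use a dedicated password-hashing scheme.

```python
# Simplified sketch of two-factor authentication: something you know plus something you have.
import hashlib, hmac, struct, time

def totp(secret: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HMAC-SHA1 over the number of `step`-second intervals since the Unix epoch.
    counter = int((time.time() if timestamp is None else timestamp) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def authenticate(password: str, otp: str, stored_hash: bytes, device_secret: bytes) -> bool:
    knows = hmac.compare_digest(hashlib.sha256(password.encode()).digest(), stored_hash)  # something you know
    has = hmac.compare_digest(otp, totp(device_secret))                                   # something you have
    return knows and has

# Placeholder enrollment values, for illustration only.
device_secret = b"enrolled-device-secret"
stored_hash = hashlib.sha256(b"correct horse battery staple").digest()
print(authenticate("correct horse battery staple", totp(device_secret), stored_hash, device_secret))  # True
```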

Once an ID holder is authenticated, the specific attributes or credentials must be verified. This involves confirming that the ID holder has earned or been issued the credentialed attributes they are claiming, whether from a financial institution, an employer, an educational institution, or a government agency.  

Authentication and verification may be all that is required for some transactions, but where needed, the system must also be able to confirm authorization, that is, to determine what the person is allowed to see or do within a given system. Successful privacy and security for businesses, organizations, and governments require the enforcement of rigorous access controls. The person who can see certain data is not always the same as the person authorized to change or manipulate it, and the person authorized to manipulate or process it may not be entitled to share or delete it. Successfully setting and enforcing these controls is one of the most challenging tasks for any organization that collects and uses personal data.
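A toy sketch of such tiered controls (the roles and permission sets are invented for illustration) shows how reading, editing, sharing, and deleting can be granted separately:

```python
# Toy sketch of tiered access controls: read, edit, share, and delete are separate rights.
PERMISSIONS = {
    "receptionist": {"read"},
    "physician": {"read", "edit"},
    "records_admin": {"read", "edit", "share", "delete"},
}

def is_authorized(role: str, action: str) -> bool:
    # Authorization answers a narrower question than authentication:
    # not "who are you?" but "what are you allowed to do here?"
    return action in PERMISSIONS.get(role, set())

print(is_authorized("physician", "read"))    # True
print(is_authorized("physician", "delete"))  # False: editing rights do not imply deletion rights
```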

While the first three steps in digital identity systems already exist in various forms, a truly universal digital identity is likely to succeed at mass scale only if it is federated, meaning that the ID must be usable across institutional, sectoral, and geographic boundaries. A federated identity system would be the most significant departure from the account-specific login and access processes that exist today. Accomplishing such wide-ranging compatibility will require a common set of open standards that institutions, sectors, and countries establish collaboratively and implement globally. A digital wallet will need to seamlessly grant access across many networks, from a movie theater verifying that entrants are over 17, to banks processing loan applications, hospitals establishing patient status and records, airports handling boarding, and amusement parks and stadiums providing scheduled performances and perks.
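One hypothetical way to picture federation (the issuer names, keys, and credential below are invented, and a real deployment would rely on open standards and public-key signatures rather than shared secrets): the relying party keeps a registry of issuers it trusts and accepts any credential whose signature verifies against the registered key, regardless of which sector or country the issuer belongs to.

```python
# Hypothetical sketch of federated credential acceptance via a shared trust registry.
import hashlib, hmac, json

# Keys the relying party has registered for issuers it trusts (invented values).
TRUST_REGISTRY = {
    "dmv.example.gov": b"dmv-shared-key",
    "university.example.edu": b"uni-shared-key",
    "bank.example.com": b"bank-shared-key",
}

def sign_credential(issuer: str, claims: dict, key: bytes) -> dict:
    payload = json.dumps({"iss": issuer, **claims}, sort_keys=True).encode()
    return {"payload": payload, "sig": hmac.new(key, payload, hashlib.sha256).hexdigest()}

def accept(credential: dict):
    # The relying party does not care which institution, sector, or country issued the
    # credential, only that the issuer is in the registry and the signature verifies.
    payload = credential["payload"]
    issuer = json.loads(payload)["iss"]
    key = TRUST_REGISTRY.get(issuer)
    if key is None:
        return None  # unknown issuer: outside the federation
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return json.loads(payload) if hmac.compare_digest(expected, credential["sig"]) else None

cred = sign_credential("university.example.edu", {"holder": "alice", "degree": "LLM"}, b"uni-shared-key")
print(accept(cred))  # the claims are accepted because the issuer is in the trust registry
```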

Global banking and financial services are leading the way on this sort of broad implementation. Therefore, online banking is a constructive digital ID use case:

Banks are motivated to forge ahead on such digital identity systems to improve fraud detection, streamline “know your customer” compliance processes, increase their ability to stop money laundering and other finance-related crimes, and offer superior customer experiences. By creating secure, standardized digital identity access for online banking, they may also extend access to the large portions of the globe that are currently un- or under-banked, and/or that have minimal governmental infrastructure around legal identity systems.

The Challenges and Opportunities

Privacy, security, equity, data abuse, and user control all raise unique challenges and opportunities for digital ID. 

Digital identity, if not deployed correctly, may undermine privacy rights. If not implemented responsibly, and carefully controlled with both technical and legal safeguards, digital IDs might allow for increased location tracking and user profiling, already a concern with cell phone technology. Blockchain technology, if not designed carefully, creates a public, immutable record of information exchanges: where, when, and why a digital ID was requested. And a given digital ID provider may have too much power, with the ability to block ID holders from accessing their digital accounts. However, digital IDs also offer the possibility of increased privacy protection if systems are effectively designed to share only the minimum necessary information, and identification is only established up to the level necessary for the particular exchange or transaction. “Privacy by design,” as well as appropriate defaults and system controls, can prevent any of the network participants, including the operator, from having complete access to users’ transactions, particularly if accompanied by appropriate legislative or regulatory boundaries.

Digital ID likewise has both pros and cons for security. While not perfect, digital IDs are generally harder to lose or counterfeit than a physical document, and they offer significantly greater security than an individual’s hundreds of separate login credentials spread across sites with uncertain levels of protection. However, poor adherence to best practices may result in a centralized store of personal and sensitive information, which may become a more appealing target for hackers and increase the risk of a mass compromise of information. The risks of centralized databases can be reduced by local storage of authenticating factors like biometrics, and by distributed storage of other data with appropriate security measures and controls.

Inequities can occur along a number of different axes. Since digital identity designs may reflect society’s biases, it is important to mandate and continually measure inclusion and performance. For instance, the UK’s digital ID framework requires the ID issuers to submit an annual exclusion report. In addition, because not everyone has a smartphone or internet access, digital IDs risk increasing inequities among those with limited connectivity. Without reliable digital access, groups that have traditionally struggled may continue to lack the privileges that digital IDs promise to provide. On the other hand, according to the World Bank, an estimated 1.1 billion people worldwide cannot officially prove or establish their legal identity. In countries or situations without clear legal processes, or lacking information infrastructures, digital identity systems have the potential to provide people who do have smartphones or internet access the ability to receive healthcare, education, finance, and other essential services. Even those without access to a digital device could use a secure physical form, like a barcode, to maintain their digital identity.  

Policy Impacts and Conclusion

Individuals are used to the ability to easily control the use of their physical documents. When you hand your passport to a TSA agent, you observe who is seeing it and how it is being used. A digital ID holder will need these same assurances, understanding, and trust. Therefore, ideally, users should be able to identify every instance that their identity was accessed by a vendor. Early systems, like the Australian Digital License App, give citizens some control over their credentials by enabling users themselves to specify the information to share or display. Legislative bodies and regulatory agencies designing or controlling such systems should work closely with industry representatives, security experts, consumer protection organizations, civil rights advocates, and other stakeholders to ensure fair systems are established and monitored appropriately. 

Transparency in development, public adoption processes, and procurement systems will be vital to public trust in any such systems, whether privately or publicly operated. In some cases, such systems may even help educate users and increase their awareness of the information that is already collected and held about them, and where and how it is being used, as well as make it easier for them to exercise control over the necessary sharing of their information.

Digital identification, integrated to a greater or lesser degree, seems an almost inevitable next step in our digital lives, and overall it offers promising opportunities to improve our access to and control over the information about us already spanning the internet. But it is crucial that, moving forward, digital ID systems are responsibly designed, implemented, and regulated to meet the necessary privacy and security standards and to prevent the abuse of individuals or the perpetuation of inequities against vulnerable populations. While there are important cautions, digital identity has the potential to transform the way we interact with the world, as our “selves” take on new dimensions and opportunities.

At the intersection of AI and Data Protection law: Automated Decision-Making Rules, a Global Perspective (CPDP LatAm Panel)

On Thursday, 15 July 2021, the Future of Privacy Forum (FPF) organised a panel during the CPDP LatAm Conference titled ‘At the Intersection of AI and Data Protection law: Automated Decision Making Rules, a Global Perspective’. The aim of the panel was to explore how existing data protection laws around the world apply to profiling and automated decision-making practices. In light of the European Commission’s recent AI Regulation proposal, it is important to explore how, and to what extent, existing laws already protect individuals’ fundamental rights and freedoms against automated processing activities driven by AI technologies.

The panel consisted of Katerina Demetzou, Policy Fellow for Global Privacy at the Future of Privacy Forum; Simon Hania, Senior Director and Data Protection Officer at Uber; Prof. Laura Schertel Mendes, Law Professor at the University of Brasilia; and Eduardo Bertoni, Representative for the Regional Office for South America, Interamerican Institute of Human Rights. The panel discussion was moderated by Dr. Gabriela Zanfir-Fortuna, Director for Global Privacy at the Future of Privacy Forum.


Data Protection laws apply to ADM Practices in light of specific provisions and/or of their broad material scope

To kick off the conversation, we presented preliminary results of an ongoing project led by the Global Privacy team at FPF on Automated Decision Making (ADM) around the world. Seven jurisdictions were presented comparatively, of which five already have a general data protection law in force (EU, Brazil, Japan, South Korea, South Africa), while two have data protection bills expected to become law in 2021 (China and India).

For the purposes of this analysis, the following provisions are being examined: the definitions of ‘processing operation’ and ‘personal data’ given that they are two concepts essential for defining the material scope of the data protection law; the principles of fairness and transparency and legal obligations and rights that relate to these two principles (e.g., right of access, right to an explanation, right to meaningful information etc.); provisions that specifically refer to ADM and profiling (e.g., Article 22 GDPR). 

The preliminary findings are summarized in the following points:

Uber, Ola and Foodinho Cases: National Courts and DPAs decide on ADM cases on the basis of existing laws

In recent months, Dutch national courts and the Italian Data Protection Authority have ruled on complaints brought by employees of the ride-hailing companies Uber and Ola and the food delivery company Foodinho challenging decisions the companies reached with the use of algorithms. Simon Hania summarised the key points of these decisions. It is important to mention that all cases arose in the employment context and were all submitted back in 2019, which means that more outcomes of ADM cases may be expected in the near future.

The first Uber case concerned the matching of drivers and riders which, as the Court judged, qualifies as decision-making based solely on automated means but does not lead to any ‘legal or similarly significant effect’. Therefore, Article 22 GDPR is not applicable. The second Uber case concerned the deactivation of drivers’ accounts following signals of potentially fraudulent behaviour or driver misconduct. There, the Court judged that Article 22 is not applicable because, as the company showed, there is always human intervention before an account is deactivated and the actual final decision is made by a human.

The third example presented was the Ola case, in which the Court decided that the company’s decision to withhold drivers’ earnings as a penalty for misconduct qualifies as a decision based solely on automated means that produces a ‘legal or similarly significant effect’, and therefore Article 22 GDPR applies.

In the last example, Foodinho, the decision-making on how well couriers perform was indeed deemed to be based solely on automated means and to produce a significant effect on the data subjects (the couriers). The problem highlighted was the way the performance metrics were established, and specifically the accuracy of the profiles created: they were not sufficiently accurate given the significance of the effects they would produce.

This last point spurs discussion of the principle of data accuracy, which is often overlooked. Having accurate data as the basis for decision-making is crucial in order to avoid discriminatory practices and achieve fairer AI systems. As Simon Hania emphasised, we should have information available that is fit for purpose in order to reach accurate decisions. This suggests that the data minimisation principle should be understood as data rightsizing, not as a requirement to simply minimise the information processed before a decision is reached.

LGPD: Brazil’s Data Protection Law and its application to ADM practices

The LGPD, Brazil’s recently passed data protection law, is heavily influenced by the EU GDPR in general, but also specifically on the topic of ADM processing. Article 20 of the LGPD protects individuals against decisions that are made only on the basis of automated processing of personal data, when these decisions “affect their interests”. The wording of this provision seems to suggest a wider protection than the relevant Article 22 of the GDPR which requires that the decision “has a legal effect or significantly affects the data subject”. Additionally, Article 20 LGPD provides individuals with a right to an explanation and with the right to request a review of the decision. 

In her presentation, Laura Mendes highlighted two points that require further clarification: first, it is still unclear how “solely automated” is defined; second, it is not clear what the scope of the review of the decision should be, nor whether the review must be performed by a human. Two provisions are core to the discussion on ADM practices:

(a) Art 6 IX LGPD, which introduces the principle of non-discrimination as a separate data protection principle. According to it, processing of data shall not take place for “discriminatory, unlawful or abusive purposes”. 

(b) Article 21 LGPD reads “The personal data relating to the regular exercise of rights by the data subjects cannot be used against them.” As Laura Mendes suggested, Article 21 LGPD is a provision with great potential regarding non-discrimination in ADM. 

Latin America & ADM Regulation: there is no homogeneity in Latin American laws but the Ibero-American Network seems to be setting a common tone

In the last part of the panel discussion, a wider picture of the situation in Latin America was presented. It should be clear that Latin America does not have a common, homogeneous approach to data protection. For example, while Argentina has had a data protection law since 2000, for which it obtained an adequacy decision from the EU, Chile is in the process of adopting a data protection law but still has a long way to go, and Peru, Ecuador, and Colombia are trying to modernize their laws.

The American Convention on Human Rights recognises a right to privacy and a right to intimacy, but there is still no interpretation by the Interamerican Court of Human Rights of either the right to data protection or, specifically, ADM practices. However, it should be kept in mind that, as was the case with Brazil’s LGPD, the GDPR has strongly influenced Latin America’s approach to data protection. Another common reference point for Latin American countries is the Ibero-American Network which, as Eduardo Bertoni explained in his talk, does not produce hard law but publishes recommendations that the respective jurisdictions follow. Regarding the discussion on ADM specifically, Eduardo Bertoni mentioned the following initiatives taken in the Ibero-American space:

Main Takeaways

While there is an ongoing debate around the regulation of AI systems and automated processing in light of the recently proposed EU AI Act, this panel brought attention to existing data protection laws which are equipped with provisions that protect individuals against automated processing operations. The main takeaways of this panel are the following:

Looking ahead, the debate around the regulation of AI systems will remain heated, and the protection of fundamental rights and freedoms in light of automated processing operations will remain a top priority. In this debate we should keep in mind that the proposed AI Regulation is being introduced into an already existing system of laws, such as data protection law, consumer law, and labour law. It is important to be clear about the reach and nature of these laws in order to identify the gap that the AI Regulation, or any other future proposal, is meant to fill. This panel highlighted that ADM and automated processing are not unregulated. On the contrary, current laws protect individuals by putting in place binding overarching principles, legal obligations, and rights. At the same time, Courts and national authorities have already started enforcing these laws.

Watch a recording of the panel HERE.

Read more from our Global Privacy series:

Insights into the future of data protection enforcement: Regulatory strategies of European Data Protection Authorities for 2021-2022

Spotlight on the emerging Chinese data protection framework: Lessons learned from the unprecedented investigation of Didi Chuxing

A new era for Japanese Data Protection: 2020 Amendments to the APPI


Insights into the Future of Data Protection Enforcement: Regulatory Strategies of European Data Protection Authorities for 2021-2022

The Future of Privacy Forum has released a report, “Insights into the future of data protection enforcement: Regulatory strategies of European Data Protection Authorities for 2021-2022”.

The European Data Protection Authorities (DPAs) are arguably the most powerful data protection and privacy regulators in the world, having been granted broad powers and competences, in addition to independence, by the European Union’s General Data Protection Regulation (GDPR). With GDPR enforcement visibly ramping up in the past year, it is important to gain insight into the key enforcement areas targeted by regulators, as well as to understand the complex or sensitive personal data processing activities for which DPAs plan to provide compliance guidance or shape public policy.

Last year, FPF released a report called New Decade, New Priorities: A summary of twelve European Data Protection Authorities’ strategic and operational plans for 2020 and beyond. It outlined EU DPAs’ regulatory priorities for 2020 and the ensuing years, based on the documents of a strategic nature released by such authorities in the first half of last year. Since then, most DPAs have published their 2020 annual reports, as well as novel short or long-term strategies. These shed light on the areas to which DPAs are likely to devote significant regulatory efforts and resources, with a broad scope: guidance, awareness-raising, corrective measures, and enforcement actions.

We have compiled and analyzed these novel strategic documents, describing where different DPA strategies have touchpoints and noteworthy particularities. The report contains links to and translated summaries of 15 DPAs’ strategic documents from DPAs in France (FR), Portugal (PT), Belgium (BE), Norway (NO), Sweden (SE), Ireland (IE), Bulgaria (BG), Denmark (DK), Finland (FI), Latvia (LV), Lithuania (LT), Luxembourg (LU) and Germany (Bavaria). The analysis also includes documents published by the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS). These documents complement or replace the ones that were included in our 2020 report.


Some of our main conclusions include: