Automated Decision-Making Systems: Considerations for State Policymakers

In legislatures across the United States, state lawmakers are introducing proposals to govern the use of automated decision-making systems (ADS) in record numbers. In contrast to comprehensive privacy bills that would regulate the collection and use of personal information, the ADS bills introduced in 2021 specifically seek to address growing concerns about racial bias and unfair outcomes in automated decisions that affect consumers, including decisions about housing, insurance, finance, and government services.

So far, ADS bills have taken a range of approaches, with most prioritizing government use of ADS: restricting government use and procurement (Maryland HB 1323); requiring inventories of government ADS currently in use (Vermont H 0236), impact assessments for procurement (CA AB-13), or external audits (New York A6042); or outright prohibiting the procurement of certain types of unfair ADS (Washington SB 5116). A handful of others would regulate commercial actors, including in insurance decisions (Colorado SB 169), consumer finance (New Jersey S1943), and the use of automated decision-making in employment or hiring decisions (Illinois HB 0053, New York A7244).

At a high level, these bills share similar characteristics. Each proposes general definitions and general solutions to cover specific, complex tools used in areas as varied as traffic forecasting and employment screening. But the bills are not consistent in their requirements and obligations. For example, among the bills that would require impact assessments, some mandate assessments universally for all ADS in use by government agencies, while others would require them only for specifically risky uses of ADS.

As states evaluate possible regulatory approaches, lawmakers should: (1) avoid a “one size fits all” approach to defining automated decision-making by clearly defining the particular systems of concern; (2) consult with experts in governmental, evidence-based policymaking; (3) ensure that impact assessments and disclosures of risk meet the needs of their intended audiences; (4) look to existing law and guidance from other state, federal, and international jurisdictions; and (5) ensure appropriate timelines for technical and legal compliance, including time for building capacity and attracting qualified experts.

1. Avoid “one size fits all” solutions by clearly identifying the automated decision-making systems of concern.

An important first step in regulating automated decision-making systems is to identify the scope of systems that are of concern. Many lawmakers have indicated that they are seeking to address automated decisions such as those that use consumer data to create “risk scores,” creditworthiness profiles, or other kinds of profiles that materially impact our lives and involve the potential for systematic bias against categories of people. But the wealth of possible forms of ADS and the many settings for their use can make defining these systems in legislation very challenging.

Automated systems are present in almost all walks of modern life, from managing wastewater treatment facilities to performing basic tasks such as operating traffic signals. ADS can automate the processing of personal data, administrative data, or myriad other forms of data, using tools that range in complexity from simple spreadsheet formulas to advanced statistical modeling, rules-based artificial intelligence, and machine learning. In an effort to navigate this complexity, it can be tempting to draft very general definitions of ADS. However, such definitions risk being overbroad and capturing systems that are not truly of concern, for example because they do not impact people or carry out significant decision-making.

For example, a definition such as “a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision-making” (New Jersey S1943) would likely capture a wide range of traditional statistical data processing, such as estimating the average number of vehicles per hour on a highway to facilitate automatic lane closures in intelligent traffic systems. This would impose a significant new requirement to conduct complex impact assessments for many of the tools behind established operational processes. In contrast, California’s AB-13 takes a more tailored approach, aiming to regulate “high-risk application[s]” of algorithms that involve “a score, classification, recommendation, or other simplified output,” that support or replace human decision-making, in situations that “materially impact a person” (12115(a)&(b)).

In general, compliance-heavy requirements or prohibitions on certain practices may be appropriate only for some high-risk systems. The same requirements would be overly prescriptive or infeasible for systems powering ordinary, operational decision-making. Successfully distinguishing between high-risk use cases and those without significant, personal impact will be crucial to crafting tailored legislation that addresses the targeted, unfair outcomes without overburdening other applications.
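To make the distinction concrete, a tiered definition like AB-13’s can be read as a conjunction of statutory criteria rather than a single catch-all. The following minimal sketch (in Python, with hypothetical attribute names that are illustrative only, not drawn from any bill’s text) shows how such a test might separate a traffic-volume estimator from a tenant-screening tool:

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical attributes a legislature might use to scope an ADS definition."""
    produces_simplified_output: bool   # a score, classification, or recommendation
    informs_human_decision: bool       # supports or replaces human decision-making
    materially_impacts_person: bool    # e.g., housing, employment, credit, benefits

def is_high_risk(system: SystemProfile) -> bool:
    """Tiered test loosely modeled on AB-13's "high-risk application" language:
    compliance-heavy duties attach only when all three criteria are met."""
    return (system.produces_simplified_output
            and system.informs_human_decision
            and system.materially_impacts_person)

# A traffic-volume estimator automates an operational decision (lane closures)
# but does not materially impact an identifiable person; a tenant-screening
# score meets all three criteria.
traffic_model = SystemProfile(produces_simplified_output=True,
                              informs_human_decision=True,
                              materially_impacts_person=False)
tenant_screener = SystemProfile(produces_simplified_output=True,
                                informs_human_decision=True,
                                materially_impacts_person=True)

print(is_high_risk(traffic_model))    # False -> ordinary operational system
print(is_high_risk(tenant_screener))  # True  -> impact assessment required
```

Under a broad definition like New Jersey S1943’s, by contrast, both systems would be swept in, because that language keys only on whether a computational process makes or facilitates a decision.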

Lawmakers should ask questions such as: Does the system make or meaningfully shape decisions about individuals? Could it materially affect a person’s access to housing, employment, credit, insurance, or government services? Does it carry a risk of systematic bias against categories of people?

These questions can help guide legislative definitions and scope. A “one size fits all” solution not only risks creating burdensome requirements in situations where they are not needed, but is also less likely to ensure strong requirements in situations where they are needed, leaving potentially biased algorithms to operate without sufficient review or standards to address unfair outcomes. An appropriate definition is a critical first step toward effective regulation.

2. Consult with experts in governmental, evidence-based policymaking. 

Evidence-based policymaking legislation, popular in the late 1990s and early 2000s, required states to construct systems intended to eradicate human bias by employing data-driven practices in key areas of state decision-making, such as criminal justice, student achievement prediction, and even land use planning. For example, as defined by the National Institute of Corrections, the vision for implementing evidence-based practice in community corrections is “to build learning organizations that reduce recidivism through systematic integration of evidence-based principles in collaboration with community and justice partners” (see resources at the Judicial Council of California 2021). The areas chosen for evidence-based policymaking are now the very areas generating the greatest concern about ADS, which have become the mechanisms for ensuring the use of evidence and eliminating subjectivity. Examining the goals envisioned in evidence-based policymaking legislation may clarify whether ADS are appropriate tools for satisfying those goals.

In addition to consulting the policies encouraging evidence-based policymaking to identify the goals for ADS, legislators can look to the evidence-based research findings reviewed to support that legislation, which point to contextually relevant, expert sources of data that should be incorporated into ADS or into their evaluation. Likewise, legislators should reflect on the challenges to implementing effective evidence-based decision-making, such as unclear definitions, poor data quality, difficulties in statistical modeling, and a lack of interoperability among public data sources, as these challenges are similar to those complicating the use of ADS.

3. Ensure that impact assessments and disclosures of risk meet the needs of their intended audiences.

Most ADS legislative efforts aim to increase transparency or accountability through various forms of mandated notices, disclosures, data protection impact assessments, or other risk assessments and mitigation strategies. These requirements serve multiple important goals, including helping regulators understand data processing and increasing internal accountability through greater process documentation. In addition, public disclosures of risk assessments benefit a wide range of stakeholders, including the public, consumers, businesses, regulators, watchdogs, technologists, and academic researchers.

Given the needs of different audiences and users of such information, lawmakers should ensure that impact assessments and mandated disclosures are leveraged effectively to support the goals of the legislation. For example, where legislators intend to improve the equity of outcomes between groups, they should include legislative support for tools that improve communication with those groups and that incorporate them into technical communities. Where sponsors of ADS bills intend to increase public awareness of automated decision-making in particular contexts, legislation should require and fund consumer education that is easy to understand, available in multiple languages, and accessible to broad audiences. In contrast, if the goal is to increase regulator accountability and technical enforcement, legislation might mandate that more detailed or technical disclosures be provided non-publicly or upon request to government agencies.

The National Institute of Standards and Technology (NIST) has offered recent guidance on explainability in artificial intelligence that might serve as a helpful model for ensuring that impact assessments are useful to the multiple audiences they may serve. The NIST draft guidelines suggest four principles of explainability for audience-sensitive, purpose-driven ADS assessment tools: (1) systems offer accompanying evidence or reason(s) for all outputs; (2) systems provide explanations that are understandable to individual users; (3) the explanation correctly reflects the system’s process for generating the output; and (4) the system only operates under conditions for which it was designed or when it reaches sufficient confidence in its output (p. 2). These four principles shape the types of explanations needed to ensure confidence in algorithmic or automated decision-making systems, such as explanations for user benefit, for social acceptance, for regulatory and compliance purposes, for system development, and for owner benefit (pp. 4-5).

Similarly, the Article 29 Working Party’s Guidelines on Automated Individual Decision-Making and Profiling provide recommendations for complying with the GDPR’s requirement that individuals be given “meaningful information about the logic involved.” Rather than requiring a complex explanation or exposure of the algorithmic code, the guidelines explain that a controller should find simple ways to tell the data subject the rationale behind, or the criteria relied upon to reach, a decision. This may include which characteristics are considered in making a decision, the source of the information, and its relevance. The explanation should not be overly technical, but it should be sufficiently comprehensive for a consumer to understand the reason for the decision.

Regardless of the audience, mandated disclosures should be used cautiously: especially when made public, such disclosures can create risks of their own, including opportunities for data breaches, exfiltration of intellectual property (IP), or even attacks on the algorithmic system that could identify individuals or cause the system to behave in unintended ways.

4. Look to existing law and guidance from other state, federal, and international jurisdictions.

Although US lawmakers have specific goals, needs, and concerns driving legislation in their jurisdictions, there are clear lessons to be learned from other regimes with respect to automated decision-making. Most significantly, a growing, active wave of legal and technical guidance regarding profiling and automated decision-making has emerged in the European Union in the years since the GDPR passed. Lawmakers may also seek to ensure interoperability with the newly passed California Privacy Rights Act (CPRA) or Virginia Consumer Data Protection Act (VA-CDPA), both of which create requirements that affect automated decision-making, including profiling. Finally, the Federal Trade Commission enforces a number of laws that could be harnessed to address concerns about biased or unfair decision-making. Of note, Singapore is also a leader in this space, having launched its Model AI Governance Framework in 2019. It is useful to understand the advantages and limitations of each model and to recognize the practical challenges of adapting systems for each jurisdiction.

General Data Protection Regulation (GDPR)

The EU General Data Protection Regulation (GDPR) broadly regulates public and private collection of personal information, including a requirement that all data processing be fair (Art. 5(1)(a)). The GDPR also creates heightened safeguards specifically for high-risk automated processing that impacts individuals, especially with respect to decisions that produce legal or similarly significant effects concerning individuals. These safeguards include organizational responsibilities (data protection impact assessments) and individual empowerment provisions (disclosures, and the right not to be subject to certain kinds of decisions based solely on automated processing).

California Privacy Rights Act (CPRA)

The California Privacy Rights Act (CPRA), passed via Ballot Initiative in 2020, expands on the California Consumer Privacy Act (CCPA)’s requirements that businesses comply with consumer requests to access, delete, and opt-out of the sale of consumer data.

While the CPRA does not create any direct consumer rights or organizational responsibilities with respect to automated decision-making, its consumer access rights include access to information about “inferences drawn . . . to create a profile” (Sec. 1798.140(v)(1)(K)) and, most likely, information about the use of the consumer’s data for automated decision-making.

Notably, the CPRA added a new definition of “profiling” to the CCPA, while authorizing the new California oversight agency to engage in rulemaking. In alignment with the GDPR, the CPRA defines “profiling” as “any form of automated processing of personal information . . . to evaluate certain personal aspects relating to a natural person, and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements” (1798.140(z)).

The CPRA authorizes the new California Privacy Protection Agency to issue regulations governing automated decision-making, including “governing access and opt‐out rights with respect to businesses’ use of [ADS], including profiling and requiring businesses’ response to access requests to include meaningful information about the logic involved in such decision-making processes, as well as a description of the likely outcome of the process with respect to the consumer.” (1798.185(a)(16)). Notably, this language lacks the GDPR’s “legal or similarly significant” caveat, meaning that the CPRA requirements around access and opt-outs may extend to processing activities such as targeted advertising based on profiling.

Virginia Consumer Data Protection Act (VA-CDPA)

The Virginia Consumer Data Protection Act (VA-CDPA), which passed in 2021 and will come into effect in 2023, takes an approach to automated decision-making inspired by both the GDPR and the CPRA.

First, its definition of “profiling” aligns with that of the GDPR and CPRA (§ 59.1-571). Second, it imposes a responsibility upon data controllers to conduct data protection impact assessments (DPIAs) for high risk profiling activities (§ 59.1-576). Third, it creates a right for individuals to opt out of having their personal data processed for the purpose of profiling in the furtherance of decisions that produce legal or similarly significant effects concerning the consumer (§ 59.1-573(5)).

The FTC Act and broadly applicable consumer protection laws

Finally, a range of federal consumer protection and sectoral laws already apply to many businesses’ uses of automated decision-making systems. The Federal Trade Commission (FTC) enforces long-standing consumer protection laws prohibiting “unfair” and “deceptive” trade practices, including the FTC Act. As recently as April 2021, the FTC warned businesses of the potential for enforcement actions for biased and unfair outcomes in AI, specifically noting that the “sale or use of – for example – racially biased algorithms” would violate Section 5 of the FTC Act.

The FTC also noted its decades of experience enforcing other federal laws that are applicable to certain uses of AI and automated decisions, including the Fair Credit Reporting Act (if an algorithm is used to deny people employment, housing, credit, insurance, or other benefits), and the Equal Credit Opportunity Act (making it “illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance”).

ADS Comparison Chart

5. Ensure appropriate timelines for technical and legal compliance, including building capacity and attracting qualified experts.

In general, timelines for government agencies and companies to comply with the law should be appropriate to the complexity of the systems that will need to be reviewed for impact. Many government offices may not be aware that the systems they use every day to improve throughput, efficiency, and program monitoring may constitute “automated decision-making.” For example, organizations using Customer Relationship Management (CRM) software from large vendors may be using predictive and profiling systems built into that software. Governmental offices also suffer from siloed procurement and development strategies and may have built or purchased overlapping ADS to serve specific, sometimes narrow, needs.

A lack of government funding, modernization, and resources to address the complexity of the systems themselves, together with the absence of prior requirements to track automated systems in contracts or procurement decisions, means that many agencies will not readily have access to technical information on all systems in use. Automated decision-making systems have been shown to suffer from technical debt and from opaque, incomplete technical documentation, or to depend on smaller automated systems that can be discovered only through careful review of source code and complex information architectures.

Challenges such as these were highlighted during 2020, when the COVID-19 pandemic prompted millions to pursue temporary unemployment benefits. When applications for unemployment benefits surged, some state unemployment agencies discovered that their programs were written in COBOL, an infrequently used programming language. Many resource-strapped agencies were relying on stop-gap code to translate COBOL into more contemporary languages. As a result, many agencies lacked the programming experts and capacity to efficiently process the influx of claims. Regulators should ensure that offices have the time, personnel, and funding to undertake the digital archaeology necessary to reveal the many layers of ADS in use today.

Finally, lawmakers should not overlook the challenges of identifying and attracting qualified technical and legal experts. For example, many legislative efforts envision a new or expanded government oversight office responsible for reviewing impact assessments. The personnel in these offices will not only need to meaningfully interpret algorithmic impact assessments; they will need to do so in an environment of high sensitivity, publicity, and technological change. As observed in many state and federal bills calling for STEM and AI workforce development, the talent pipeline is limited, and legislatures should address the challenge of attracting appropriate talent as a key component of these bills. Likewise, identifying appropriate expectations of performance, including ethical performance, for ADS review staff will take time, resources, and collaboration with new actors, such as the National Society of Professional Engineers, whose code of conduct governs many working in fields responsible for designing or using ADS.

What’s Next for Automated Decision System Regulation?

States are continuing to take up the challenge of regulating these complex and pervasive systems. To ensure that these proposals achieve their intended goals, legislators must address the ongoing issues of definition, scope, audience, timelines and resources, and mitigating unintended consequences. More broadly, legislation should help motivate more challenging public conversations about evaluating the benefits and risks of using ADS as well as the social and community goals for regulating these systems. 

At the highest level, legislatures should bear in mind that ADS are engineered systems or products, subject to product regulations and to the ethical standards that govern those building products. In addition to existing laws and guidance, legislators can consult the norms of engineering ethics, such as the NSPE’s code of ethics, which requires that engineers hold paramount the safety, health, and welfare of the public in the products they design. Stakeholder engagement, including with consumers, technologists, and the academic community, is imperative to ensuring that legislation is effective.

Additional Materials:

FPF Ethical Data Use Committee will Support Research Relying on Private Sector Data

FPF has launched an independent ethical review committee to provide oversight for research projects that rely upon the sharing of corporate data with researchers. Whether researchers are studying the impact of platforms on society, supporting evidence-based policymaking, or understanding issues from COVID to climate change, personal data held by companies is increasingly essential to advancing scientific knowledge.

Companies want to be able to cooperate with researchers to use data and machine learning tools to drive innovation and investment, while ensuring compliance with data protection rules and ethical guidelines. To accomplish this, some companies are ramping up their internal ethical knowledge base and staff. However, reviewing high-risk, high-reward analytics projects in-house can be expensive, complex, and may lead to accusations of favoritism or ethics-washing. Traditional academic IRBs may consider the corporate data previously collected for business uses to be out of scope of their review, creating a gap for independent expert ethical review.

Many of the projects that seek to expand human knowledge rely on insights derived from combinations of data and use of machine learning or other advanced data analysis techniques. Sharing data for research drives innovation but it may also create novel risks that must be responsibly considered.

The FPF Ethical Data Use Committee (EDUC) provides companies and their research partners with ethics review as a service. The EDUC will provide an independent expert review of proposed research data uses to help companies limit the risks of unintended outcomes or data-based discrimination. The committee will also help researchers ensure that their uses of secondary data are ethically aligned. As part of the review, the committee will provide specific recommendations that companies and researchers can implement to mitigate the identified risks of individual, group, or social harms. These reviews are particularly useful for many uses of data, including machine learning-based research, models, and systems.

The Committee – designed and developed with the generous support of Schmidt Futures and building on previous FPF work funded by the Alfred P. Sloan Foundation and the National Science Foundation – will include experts from a range of disciplines, including academic researchers, ethicists, technologists, privacy professionals, lawyers, and others. Members will complete training on data protection and privacy, AI and analytics, applied ethics, and other topics beyond their own expertise before serving terms on the Committee. Technical specialists will also be tapped for guidance on specific topic areas as required.

At this time, the Ethical Data Use Committee is preparing for final user-preference pilot testing. We are soliciting partners who aspire to be the first to use this system, under cost conditions that will not be available once the review committee becomes fully operational. Companies and researchers participating in this final testing phase can do so confidentially and at no cost, in exchange for feedback on the process.

If you have a project that you think should be reviewed by the Ethical Data Use Committee or if you would like to recommend yourself or someone else as a member for the inaugural review term, please contact Dr. Sara Jordan at [email protected].

FPF Welcomes New Members to the Youth & Education Privacy Team

We are thrilled to announce two new members of FPF’s Youth & Education Privacy team. The new staff – Joanna Grama and Jim Siegl – will help expand FPF’s technical assistance and training, resource creation and distribution, and state and federal legislative tracking.

You can read more about Joanna and Jim below. Please join us in welcoming them to the team!

Joanna Grama is a Senior Fellow with the Future of Privacy Forum’s Youth and Education team. Joanna will be assisting with various Youth and Education team projects, including the Train-the-Trainer program for higher education.

Joanna has more than 20 years of experience with a strong focus on law, higher education, data privacy, and information security. A former member of the U.S. Department of Homeland Security’s Data Privacy and Integrity Advisory Committee, Joanna is a frequent author and regular speaker on privacy and information security topics. The third edition of her textbook, Legal and Privacy Issues in Information Security, was published in late 2020.

An associate vice president at Vantage Technology Consulting Group, Joanna is also a board member and vice president for the Central Indiana chapter of the Information Systems Audit and Control Association (ISACA); and a member of the International Association for Privacy Professionals (IAPP), the American Bar Association, Section of Science and Technology Law (Information Security Committee), and the Indiana State Bar Association (Written Publications Committee). She has earned the CISSP, CIPT, CRISC, and GSTRT certifications.

Joanna was formerly the Director of Cybersecurity and IT Governance, Risk and Compliance programs at EDUCAUSE. Joanna graduated from the University of Illinois College of Law with honors. Her undergraduate degree is from the University of Minnesota-Twin Cities.

“I have spent my career looking at technology use in higher education through a lens that includes law, policy, information security, and privacy. Joining FPF, and the Youth and Education Privacy team in particular, is a ‘bucket list’ opportunity for me. I am excited to contribute thought leadership around student data privacy issues during a time of great technological change.”

Jim Siegl

Jim Siegl, CIPT, is a Senior Technologist with the Youth & Education Privacy team. For nearly two decades prior to joining FPF, Jim was a Technology Architect for the Fairfax County Public School District with a focus on privacy, security, identity management, interoperability, and learning management systems. He was a co-author of the CoSN Privacy Toolkit and the Trusted Learning Environment (TLE) seal program and holds a Master of Science in the Management of Information Technology from the University of Virginia.

“I am excited about joining FPF’s Youth & Education Privacy team during such a unique moment in time for student privacy. I’m looking forward to being a resource to stakeholders as they navigate new and existing student privacy concerns.”

Interested in student privacy? Subscribe to our monthly education privacy newsletter here. Want more info? Check out Student Privacy Compass, the education privacy resource center website.

5 Highlights from FPF’s “AI Out Loud” Expert Panel

On Wednesday, April 14th, FPF hosted an expert panel discussion on “AI Out Loud: Representation in Data for Voice-Activated Devices, Assistants.” FPF’s Senior Counsel and Director of AI and Ethics, Brenda Leong, moderated the panel, which featured Anne Toth, Director of Alexa Trust, Amazon; Irina Raicu, Internet Ethics Program Director, Markkula Center for Applied Ethics, Santa Clara University; and Susan Gonzales, CEO, AIandYou.

The panel discussed voice-activated systems in homes, on mobile devices, and in cars and other commercial settings, considering how design choices, data collection, and ethics evaluations can affect bias, fairness, and accessibility. This technology offers many benefits and quality-of-life opportunities: accessibility for young, aging, or disabled populations; convenience; and interactivity across devices and services. But it also raises specific concerns around privacy, responsible data management, legal compliance, and equity and fairness.


Here are 5 key highlights from “AI Out Loud”: 
  1. Irina Raicu pointed out the need to improve design and development processes to ensure inclusiveness, equity, accessibility, and safety for users of these systems. She recommended including all stakeholders to share how these technologies directly impact them. She also urged caution on new applications of these systems, such as emotion detection or medical diagnosis, until the supporting research is strong enough to justify such uses.
  2. Susan Gonzales pointed out that the technology behind these systems still faces significant accuracy challenges. A Stanford study found error rates almost twice as high for Black speakers as for white speakers. In general, word error rates, the most common metric for evaluating these systems (a sketch of how this metric is computed follows this list), show lower accuracy for those with strong accents, those speaking a second language, those with heavy dialects, and in many cases, across age and gender.
  3. The potential harms caused by inaccuracies vary with context and use case. While poor song recommendations or inaccurate recipe ingredients are relatively low impact, mistakes for those asking about medication, or relying on voice assistants for access to personal accounts and services, may carry greater repercussions. Those most dependent on these systems may also be those most at risk of poor results. Ethical standards demand that reliability be sufficiently high for all users.
  4. Anne Toth pointed to the significant advances in accuracy and representation in recent years, as more people engage with these devices in a broader variety of contexts. She confirmed Amazon’s commitment to continuous improvement based on the increased, and more diverse, amounts of voice data available, while also prioritizing personal privacy and users’ access to and control over their data.
  5. To ensure fairness, inclusiveness, and accessibility in these technologies, designers and developers must address diversity at all stages from inception to launch. Companies should collaborate with advocacy groups, civil society, and academia to seek outcomes that provide equitable services to all potential users.
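Word error rate, the metric referenced in the second highlight above, is the word-level edit distance between a reference transcript and the system’s output, divided by the number of words in the reference. Below is a minimal sketch in Python, with a made-up utterance for illustration:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance (substitutions +
    deletions + insertions) divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# The transcript mishears one word and drops another: 2 errors / 5 words = 0.4
print(word_error_rate("turn on the kitchen lights", "turn on the chicken"))
```

Because every substituted, dropped, or inserted word counts as an error, a systematically higher word error rate for one group of speakers translates directly into a degraded service for that group.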

Watch the expert panel on FPF’s YouTube Channel and visit our events page for upcoming opportunities. 

U.S. Department of Education Opens an Investigation into Pasco County’s Predictive Policing Program

This post was originally released on studentprivacycompass.org, and can be found here.

On Friday, the U.S. Department of Education opened an investigation into the data-sharing practices between Florida’s Pasco County sheriff’s office and school district. First uncovered by Tampa Bay Times reporting in November 2020, the partnership allowed the sheriff’s office to use student grades, attendance, disciplinary records, and aspects of students’ home lives to identify and target students “at-risk” of criminal activity; the Department will be investigating the school district’s role in that arrangement. FPF applauds the Department’s decision to investigate this concerning partnership. Any school data-sharing partnership must value student privacy and build in community trust and transparency; before the Tampa Bay Times story, parents and students in Pasco County were completely unaware of the sheriff’s practices.

In December 2020, FPF analyzed the sheriff’s public documentation and contract with the school board, concluding that the sheriff’s office unlawfully accessed and used student records for its database in violation of the Family Educational Rights and Privacy Act (FERPA), as well as its contract with the school board. Amelia Vance, FPF’s Director of Youth and Education Privacy, was quoted in the original Tampa Bay Times article revealing the program, noting that

“The law does say school resource officers can access education records because they can be considered ‘school officials.’ But under most circumstances, they can’t share the records with the rest of the department. And they can’t use them in a law enforcement investigation without permission from a parent, unless there is a court order or a health and safety emergency.”

The Department’s announcement follows significant public outcry, although the Department has declined to share its investigation letter during the early stages of the inquiry. In January, Representative Bobby Scott (D-VA), Chair of the House Education and Labor Committee, called on the Department to investigate the program for FERPA violations. In his letter, Rep. Scott decried the program, noting that “this use of student records goes against the letter and spirit of FERPA and risks subjecting students, especially Black and Latino students, to excessive law enforcement interactions and stigmatization.”

FPF Report Outlines Opportunities to Mitigate the Privacy Risks of AR & VR Technologies

A new report from the Future of Privacy Forum (FPF), Augmented Reality + Virtual Reality: Privacy & Autonomy Considerations in Emerging, Immersive Digital Worlds, provides recommendations to address the privacy risks of augmented reality (AR) and virtual reality (VR) technologies. The vast amount of sensitive personal information collected by AR and VR technologies creates serious risks to consumers that could undermine the adoption of these platforms and limit their utility.


“XR technologies are rapidly being adopted by consumers and increasingly being used for work and for education. It’s essential that guidelines are set to ensure privacy and safety while business models are being established,” said FPF CEO Jules Polonetsky. 

The report considers current and future use cases for XR technology and provides recommendations for how platforms, manufacturers, developers, experience providers, researchers, and policymakers can implement XR responsibly.

“XR technologies provide substantial benefits to individuals and society, with existing and potential future applications across education, gaming, architectural design, healthcare, and much more,” said FPF Policy Counsel and paper author Jeremy Greenberg. “XR technology systems often rely on biometric identifiers and measurements, real-time location tracking, and precise maps of the physical world. The collection of such sensitive personal information creates privacy risks that must be considered by stakeholders across the XR landscape in order to ensure this immersive technology is implemented responsibly.”

The release of the report kicks off FPF’s XR Week of activities, happening from April 19th to 23rd. XR Week will explore key elements of the report in greater detail, including the differences between various immersive technologies, their use cases, important privacy and ethical questions surrounding XR technologies, compliance challenges associated with XR technologies, and how XR technology will continue to evolve.

FPF’s featured XR Week event, AR + VR: Privacy & Autonomy Considerations for Immersive Digital Worlds will include a conversation between FPF Policy Counsel Jeremy Greenberg and Facebook Reality Labs Director of Policy James Hairston, followed by a panel discussion with Magic Leap Senior Vice President Ana Lang, Common Sense Media Director of Platform Accountability and State Advocacy Joe Jerome, and behavioral scientist Jessica Outlaw. 

To register and learn more about FPF’s other XR Week events, read this blog post.

FPF Testifies on Automated Decision System Legislation in California

Last week, on April 8, 2021, FPF’s Dr. Sara Jordan testified before the California Assembly Committee on Privacy and Consumer Protection on AB-13 (Public contracts: automated decision systems). The legislation passed out of committee (9 Ayes, 0 Noes) and was re-referred to the Committee on Appropriations. The bill would regulate state procurement, use, and development of high-risk automated decision systems by requiring prospective contractors to conduct automated decision system impact assessments.

At the hearing, Dr. Jordan commented as an expert witness alongside Vinhcent Le, who represented The Greenlining Institute. Dr. Jordan commended the sponsors for amending the definition of “automated decisionmaking” to account for the wide range of technical complexity in automated systems. In addition, Dr. Jordan testified that the government contract stage is an appropriate stage for the introduction of algorithmic impact assessments for high-risk applications of automated decisionmaking. This would allow authorities in California to evaluate technology before it is implemented using transparent and actionable assessment criteria.

FPF Partners with FCBA – The Tech Bar and Loeb & Loeb to Launch New Law Student Diversity Internship

FPF and The Tech Bar announced the FPF Loeb & Loeb Diversity Pipeline Internship, a first-of-its-kind partnership among three organizations committed to diversity, equity, and inclusion in the legal and policy profession, especially in the technology, media, and telecom (TMT) sector. The inaugural FPF Loeb & Loeb Diversity Pipeline intern will join approximately 20 other law students interning this summer at leading TMT organizations through the FCBA Diversity Pipeline Program.


Currently in its first year of operation, the Diversity Pipeline Program is an employment program with a legal skills development component that connects first-year law students from historically underrepresented and disadvantaged groups with paid summer legal internship opportunities in the private sector and at non-governmental organizations (NGOs).

“FPF could not be more pleased to host the inaugural FPF Loeb & Loeb Diversity Pipeline Summer Internship,” said John Verdi, FPF’s VP of Policy.  “We are grateful for Loeb’s generous support and the FCBA’s partnership.  We all have a responsibility to create a more inclusive tech policy community; this internship promises to highlight and support the voices of early-career professionals with diverse backgrounds and experiences.” 

“Building on the first phase of the Diversity Pipeline Program that focused on private sector internships, we are thrilled to enter this next phase: a groundbreaking partnership with FPF and Loeb & Loeb. If we truly want to increase diversity in TMT law and policy work, we have to push beyond firms, companies, and associations to ensure that students from historically underrepresented and disadvantaged groups have access to paid internships in the non-profit sector as well. Working with firms that can help support such efforts is a critical step. This creative partnership will serve as a model for ongoing FCBA initiatives to enable diverse law students to get valuable first-hand experience at researching, analyzing, and formulating policy proposals on the many exciting issues at the cross-section of technology, law, and policy,” said Natalie Roisman, FCBA President. “We are grateful to see the success of the Diversity Pipeline Program in supporting more diversity in the tech space and eager to learn from FPF, an organization with an established TMT law and policy internship program and related alumni network.”

Ken Florin, Chair, Loeb & Loeb, LLP, said, “Loeb is thrilled to be partnering with FCBA—The Tech Bar and FPF by participating in the FCBA Diversity Pipeline Program.  We look forward to the opportunity to work alongside FPF to mentor and support a diverse law student in a summer internship at FPF on legal and policy issues at the intersection of technology and privacy.  We recognize that building diversity into the legal talent pipeline is critical, and we hope this opportunity will support this year’s intern on their path toward a successful legal career.”

The FPF Loeb & Loeb Diversity Pipeline Summer Intern will work on cutting-edge TMT law and policy issues in areas such as consumer privacy, youth privacy, algorithms, and privacy-enhancing technologies.

“We hope this non-profit/law firm partnership to advance diversity in the TMT sector is the first of many,” said Rudy Brioche, Diversity Pipeline Committee Co-Chair. “We welcome the opportunity to work with other non-profits as we expand the program next fall for the 2022 Summer Internship Program.”

Join FPF For XR Week: April 19th-23rd, 2021

fpf arvr report socialgraphics fb 2

Adoption of augmented and virtual reality hardware and software technologies – collectively known as extended reality or “XR” – is taking hold among businesses and individuals. If you’d like to engage in the discussion about the ethical and privacy considerations of XR tech, join our XR Week activities April 19th to 23rd.

After decades of development, demonstrations, and improvements to hardware and software, immersive technologies are increasingly being implemented in education and training, gaming, multimedia, navigation, and communication. Emerging use cases will let individuals explore complicated moral dilemmas or experience a shared digital overlay of the physical world in real time. But XR technologies typically cannot function without collecting sensitive personal information – data that can create privacy risks. 

FPF’s XR Week will explore key privacy and ethical questions surrounding augmented reality (AR), virtual reality (VR), and related immersive technologies. The week will feature several events, including a roundtable discussion with expert participants and several conversations hosted in virtual reality. 

April 19th, 1:00 – 1:20PM EDT: Reel Virtuality

To kick off XR Week, FPF Policy Counsel and lead on XR technology Jeremy Greenberg and FPF Vice President of Policy John Verdi will discuss a report, Augmented Reality + Virtual Reality: Privacy & Autonomy Considerations in Emerging, Immersive Digital Worlds, to be released on the same day. Greenberg and Verdi will discuss the differences between various immersive technologies, primary use cases, and key privacy and ethical questions. The conversation, originally recorded in Real VR Fishing, can be viewed in 2-D on LinkedIn Live – register for the event on LinkedIn to receive a notification when it begins. 

April 21st, 2:00 – 3:30PM EDT: AR + VR: Privacy & Autonomy Considerations for Immersive Digital Worlds

Our featured XR Week event, AR + VR: Privacy & Autonomy Considerations for Immersive Digital Worlds, will include a conversation between FPF Policy Counsel and lead on XR technology Jeremy Greenberg and Facebook Reality Labs Director of Policy James Hairston. A panel, moderated by Greenberg, will discuss the recorded conversation. Panelists will include Magic Leap Senior Vice President Ana Lang; Common Sense Media Director of Platform Accountability and State Advocacy Joe Jerome; and behavioral scientist Jessica Outlaw.

Register here.

April 22nd, 1:00 – 1:10PM EDT: Sculpting XR Compliance

On the Thursday of XR Week, Greenberg and BakerHostetler Data Protection Attorney Carolina Alonso will discuss the legal compliance challenges associated with XR technologies. The conversation, originally recorded in SculptrVR, can be viewed in 2-D on LinkedIn Live – register for the event on LinkedIn.

We hope you’ll join us!

A New Era for Japanese Data Protection: 2020 Amendments to the APPI


Authors: Takeshige Sugimoto, Akihiro Kawashima, and Tobyn Aaron of S&K Brussels LPC. The authors can be contacted at [email protected].

The recent amendments to Japan’s data protection law (the Act on the Protection of Personal Information, henceforth the ‘APPI‘) contain a number of new provisions certain to alter – and for many foreign businesses, transform – the ways in which companies conduct business in or with Japan. In addition to greatly expanding data subject rights, most notably, the amendments to the APPI (the ‘2020 Amendments‘): 

(i) eliminate all former restrictions on the APPI’s extraterritorial application; 

(ii) considerably heighten companies’ disclosure and due diligence obligations with respect to overseas data transfers; 

(iii) introduce previously unregulated categories of personal information (each with corresponding obligations for companies), including ‘pseudonymously processed information’ and ‘personally referable information’; and 

(iv) for the first time, mandate notifications for qualifying data breaches.

The 2020 Amendments will be enforced by the Personal Information Protection Commission of Japan (the “PPC”), pursuant to forthcoming PPC guidelines alongside the amended Enforcement Rules for the Act on the Protection of Personal Information (the ‘amended PPC Rules‘) and the amended Cabinet Order to Enforce the Act on the Protection of Personal Information (the ‘amended Cabinet Order‘) (both published on March 24, 2021).

As the 2020 Amendments are set to enter into force on April 1, 2022, Japanese and global companies that conduct business in or with Japan have just under one year to bring their operations into compliance. To facilitate such efforts, this blog post describes the provisions of the 2020 Amendments likely to have the greatest impact on businesses, as well as current events in Japan that will affect their implementation and should inform how companies address enforcement risks and compliance priorities.

1. LINE Data Transfers to China: A Wake-Up Call for Japan

To appreciate the effect that the 2020 Amendments will have on the Japanese data protection space, one must first consider the current political and societal contexts in Japan in which the 2020 Amendments will be introduced – and enforced – beginning with a recent incident of note involving LINE Corporation. 

In March 2021, headlines across Japan shocked locals: Japan-based messaging app LINE, actively used and trusted by approximately 86 million Japanese citizens, had been transferring users’ personal information, including names, IDs and phone numbers, to a Chinese affiliate. It is neither unusual nor unlawful for Japanese tech companies to outsource certain of their operations, including personal information processing, overseas. But for Japanese nationals, the LINE matter is different for a number of important reasons, not least of which is the Japanese population’s awareness of the Chinese Government’s broad access rights to personal data managed by private-sector companies in China, pursuant to China’s National Intelligence Law.

LINE is not only the most utilized messaging application in Japan; it also occupies a special place in the country’s historical and cultural consciousness. When Japan was hit by the 2011 earthquake, voice networks failed and email exchanges were delayed as citizens struggled to communicate with, and confirm the safety of, their loved ones. And so LINE was born – a simple messaging and online calling tool to serve as a communications hotline in case of emergency. A decade on, LINE has become the major – and for many the only – means of communication in Japan, particularly in today’s socially-distanced world.

For the Japanese Government too, LINE serves a crucial role: national- and municipal-level government bodies use LINE for official communications, including of sensitive personal information such as COVID-19 health survey data. News of LINE’s transfer of user data to China, including potential access by the Chinese Government, therefore horrified private citizens and public officials alike.

On March 31, 2021, the PPC launched an official investigation into LINE and its parent company, Z Holdings, over their management of personal information. Until such investigation is concluded, whether and to what extent LINE violated the APPI (and in particular, its provisions governing third party access and international transfers) will remain uncertain. Regardless, the impact of this matter on the Japanese data privacy space is already unfolding. In late March, a number of high-ranking Japanese politicians (including Mr. Akira Amari, Chairperson of the Rule-Making Strategy Representative Coalition of the Liberal Democratic Party of Japan) sent the PPC and other relevant Government ministries strongly-worded messages urging immediate action with respect to LINE, and more broadly, calling for a risk assessment to be conducted vis-à-vis all personal information transfers to China by companies in Japan.

Several days later, Japanese media reported that the PPC had requested that members of both the KEIDANREN (the Japan Business Federation, comprising 1,444 representative companies in Japan) and the Japan Association of New Economy (comprising 534 member companies in Japan) report their personal information transfer practices involving China and detail the privacy protection measures in place for such transfers. For any APPI violations revealed, the PPC will issue a recommendation, potentially followed by an injunctive order, the latter of which carries a criminal penalty (including possible imprisonment) if not implemented.

Importantly, recent political support for stronger data protection measures extends beyond transfers to China. For instance, Mr. Amari has also reportedly called on the PPC to broadly limit permissible overseas transfers of personal information to those countries with data protection standards equivalent to the APPI (a limitation which, if implemented, would greatly surpass restrictions on transfer under both the current APPI and the 2020 Amendments).

Although the PPC has yet to respond, it is evident that both political and popular sentiment in Japan strongly favor enhanced protections for Japanese persons’ personal information. The inevitable outcome of such sentiment, which may be further amplified depending on the PPC’s forthcoming conclusions regarding the LINE matter, will be the increasingly stringent enforcement of the APPI and its 2020 Amendments, and potentially, further amendments thereto. As recent events in Japan demonstrate, this transformation has already begun to take effect. Companies conducting business in or with Japan, whether Japanese or foreign, should therefore pay close attention to the Japanese data privacy space over the course of this year.

2. Broadened Extraterritorial Reach and International Transfer Restrictions

For ‘Personal Information Handling Business Operators’ (henceforth ‘Operators‘, a term used in joint reference to controllers and processors, upon which the APPI imposes the same obligations) arguably the greatest impact of the 2020 Amendments will derive from their drastic revisions to Article 75 (extraterritoriality) and Article 24 (international transfer).

To date, the APPI’s extraterritorial reach has been limited to a handful of its articles, primarily those governing purpose limitation and lawful acquisition of personal information (‘PI‘) by overseas Operators. From April 2022, however, Article 75 of the amended APPI will, without exception, fully bind all private-sector overseas entities, regardless of their size, which process the PI, pseudonymously processed PI or anonymously processed PI of individuals who are in Japan, in connection with supplying goods or services thereto.

With respect to international transfers, Article 24 of the current legislation prohibits the transfer of PI to a ‘third party’ outside of Japan absent the data subject’s prior consent, unless (i) the recipient country has been white-listed by the PPC or (ii) the recipient third party upholds data protection standards equivalent to the APPI (in practice, these would generally be imposed contractually). Otherwise, international transfers may also be conducted pursuant to legal obligation or necessity (for the protection of human life, public interest or governmental cooperation, provided that for each, the data subject’s consent would be difficult to obtain). The APPI’s international transfer mechanisms generally conform to those prescribed by other global data protection regimes, loosely resembling the EU GDPR’s adequacy decisions (with respect to (i) above), and standard contractual clauses or binding corporate rules (with respect to (ii) above, although there are no PPC-provided contractual clauses, and non-binding arrangements such as the APEC CPBR System are PPC-approved).

The 2020 Amendments and amended PPC Rules do not modify the above transfer mechanisms, but they do narrow their scope in two key aspects. First, pursuant to Article 24(2) of the 2020 Amendments, transfers conducted on the basis of data subject consent will henceforth require the transferring Operator (on top of preexisting notification obligations) to inform the data subject in advance as to the name of the recipient country, and the levels of PI protection provided by both that country (assessed using an “appropriate and reasonable method”) and the recipient third party. Absent such information, data subject consent will be rendered uninformed and the transfer, invalid.

Of greater impact on the transferring Operator, however, will be the second modification (pursuant to Article 24(3) of the 2020 Amendments): in the event that an international transfer is conducted in reliance on contractually or otherwise imposed APPI data protection standards (the primary transfer mechanism on which Operators in Japan rely), such contractual safeguards alone are to be rendered insufficient. Going forward, the transferring Operator must, in addition to imposing APPI-equivalent obligations upon a recipient third party, (i) take “necessary action to ensure continuous implementation” of such obligations by the recipient; and (ii) inform the data subject, upon request, regarding the actions the Operator has taken.

With respect to (i) above, the amended PPC Rules interpret “necessary action to ensure continuous implementation” as requiring the transferring Operator to: (1) periodically check the implementation status and content of the APPI-equivalent measures by the recipient third party, and assess (by an “appropriate and reasonable method”) the existence of any foreign laws which might impact such implementation; (2) take necessary and appropriate actions to remedy any obstacles that are found; and (3) suspend all PI transfer to the third-party recipient, should its continuous implementation of the APPI-equivalent measures become difficult.

In addition, following receipt of a data subject’s request for information (pursuant to (ii) above), the amended PPC Rules specify that the transferring Operator must, without undue delay, inform the requesting data subject of each of the following:

(1)  the manner by which the APPI-equivalent measures were established by (or presumably with) the recipient third party (such as a data processing agreement or memorandum of understanding, or in the case of inter-group transfers, a privacy policy);

(2)  details of the APPI-equivalent measures implemented by the recipient third party;

(3)  the frequency and method by which the transferring Operator checked such implementation;

(4)  the name of the recipient country;

(5)  whether any foreign laws may affect the implementation of the APPI-equivalent measures, and a detailed overview of such laws;

(6)  whether any obstacles to implementation exist, and a detailed overview of such obstacles; and

(7)  the measures taken by the transferring Operator upon a finding of such obstacles.

An Operator may refrain from such disclosure, in whole or in part, only if providing the above items to the data subject would be likely to ‘significantly hinder’ the Operator’s business operations.

In practice, Operators primarily rely upon contractual safeguards and consent (in that order) to transfer PI outside of Japan. Indeed, the PPC’s list of “adequacy decisions” on which transferring Operators may alternatively rely is significantly shorter than that of the European Commission: to date, only the UK and EEA members have been deemed adequate recipients of a PI transfer from Japan. Therefore, the onerous informational and due diligence obligations incumbent upon Operators from April 2022, which affect precisely these two transfer mechanisms, are certain to impact business operations in Japan. And, given the 2020 Amendments’ unbridled extraterritoriality, this burden will be felt equally overseas. Most importantly, in the wake of the March 2021 LINE matter, compliance with the current and amended APPI, and in particular its overseas transfer restrictions, will be at the top of the PPC’s enforcement priorities.

3. Mandatory Data Breach Notifications

In addition to expanding the types of security incidents subject to the amended APPI, the 2020 Amendments make data breach notifications mandatory (under the current legislation, notification is subject only to ‘best efforts’). Going forward, Operators will be required – pursuant to Article 22-2 of the 2020 Amendments and the amended PPC Rules – to promptly notify both the PPC and data subjects of the occurrence or potential occurrence of any data leakage, loss, damage or similar incident which poses a ‘high’ risk to the rights and interests of data subjects (henceforth, a ‘breach’).

The types of breaches which meet this ‘high’ risk threshold, and thus trigger a notification obligation, are described by the amended PPC Rules as those which involve, or potentially involve, any of the following: (i) sensitive (‘special care-required’) PI; (ii) financial injury caused by unauthorized usage; (iii) a wrongful purpose as the cause; or (iv) more than 1,000 affected data subjects. However, notification is not required where the Operator has implemented ‘necessary measures’ to safeguard the rights and interests of data subjects (such as sophisticated encryption).
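
Read together, the threshold and its carve-out amount to a simple disjunction of the four criteria, defeated where ‘necessary measures’ are in place. The following is a minimal sketch of that reading only; the function and parameter names are hypothetical:

```python
def breach_requires_notification(
    involves_sensitive_pi: bool,        # (i) 'special care-required' PI involved
    risks_financial_injury: bool,       # (ii) financial injury via unauthorized usage
    wrongful_purpose: bool,             # (iii) a wrongful purpose as the cause
    affected_data_subjects: int,        # (iv) scale of the (potential) breach
    necessary_measures_in_place: bool,  # e.g. sophisticated encryption
) -> bool:
    """Return True if the (potential) breach meets the 'high' risk threshold."""
    if necessary_measures_in_place:
        return False  # carve-out: rights and interests already safeguarded
    return (involves_sensitive_pi
            or risks_financial_injury
            or wrongful_purpose
            or affected_data_subjects > 1000)
```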

The amended PPC Rules also stipulate the required content for such notifications, although Operators are granted thirty days to provide details unknown at the time of the initial notice:

(1) overview of the breach;

(2) the types of PI affected or possibly affected by the breach;

(3) the number of data subjects affected or possibly affected by the breach;

(4) causes of the breach;

(5) existence and nature of secondary damage or risks thereof;

(6) status and nature of communications to affected data subjects;

(7) whether and how the breach has been publicized;

(8) measures implemented to prevent a recurrence; and

(9) any additional matters which may serve as a useful reference.

For those Operators ‘entrusted’ by another Operator with the processing of PI, the 2020 Amendments provide a second option: in lieu of notifying the PPC and data subjects, such ‘entrusted’ Operators may instead notify the ‘entrusting’ Operator of the breach. In practice, this broadly parallels the EU GDPR’s requirement for processors to notify controllers in the event of a breach (although under the 2020 Amendments, direct accountability to the PPC and data subjects remains the default, including for ‘entrusted’ Operators).

In the event of a breach, amended Article 30(5) additionally confers upon data subjects the right to request deletion, suspension of use, and suspension of transfer of affected PI.

4. Expansion of ‘Personal Information’ Concepts and Categories

Another major modification to the APPI is the expanded scope of the types of PI covered. In addition to eliminating the APPI’s differential treatment of temporary PI (retained for up to six months), the 2020 Amendments introduce a new category of information, ‘pseudonymously processed information’, thereby bringing the Japanese data protection regime one additional step closer to the EU GDPR framework.

As currently drafted, the APPI recognizes only two major types of information: PI and anonymously processed information. Notably, the method of rendering anonymously processed information under the APPI – in contrast to the EU GDPR – need not be technically irreversible (unless such data originates in the UK or EEA and the transfer is based on the European Commission’s adequacy decision on Japan, in which case special PPC-drafted Supplementary Rules do require irreversibility); instead, the APPI endeavors to preserve anonymity by requiring Operators to implement appropriate security measures to prevent reidentification.

Pseudonymously processed information is defined by the 2020 Amendments as information relating to an individual which cannot identify that individual unless collated with additional information. The drafters’ stated intention in introducing the pseudonymization process is to enable Operators to (i) utilize pseudonymously processed information for internal purposes, including business analytics, the development of computational models and the like, and/or (ii) retain for potential future statistical analysis, rather than delete, pseudonymously processed information derived from PI which are no longer necessary for the original purpose(s) for which they were collected.

The 2020 Amendments and amended PPC Rules model the pseudonymization process on anonymization, requiring the removal of any (i) description, (ii) unique ‘personal identification code’ (as defined in the APPI), and (iii) information relating to the processing method performed to enable the removal of (i) and (ii) above. The immediate result is the creation, by separation, of two types of information: pseudonymously processed information and ‘removed’ PI, where the latter is the ‘key’ enabling reidentification.
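
Conceptually, this separation resembles splitting each record into a working copy with identifiers stripped out and a separately held reidentification ‘key’. The sketch below loosely illustrates that split only; the record format and identifier fields are invented for illustration, and actual pseudonymization under the amended PPC Rules involves further context-specific requirements:

```python
def pseudonymize(record: dict) -> tuple[dict, dict]:
    """Split a record into (pseudonymously processed information, removed PI).

    The hypothetical identifier fields stand in for (i) identifying
    descriptions and (ii) 'personal identification codes'; information
    about the removal method itself (iii) must likewise be kept out of
    the working copy.
    """
    identifier_fields = {"name", "passport_number", "my_number"}  # illustrative only
    pseudonymized = {k: v for k, v in record.items() if k not in identifier_fields}
    removed_pi = {k: v for k, v in record.items() if k in identifier_fields}
    return pseudonymized, removed_pi

# Usage: the removed PI acts as the reidentification 'key' and remains PI,
# so it must be held and secured separately from the pseudonymized data.
working_copy, key = pseudonymize(
    {"name": "Taro Yamada", "my_number": "1234", "purchase_total": 9800}
)
```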

The removed PI are treated as PI under the 2020 Amendments, and as such are subject to all of the same requirements and restrictions, although Operators in possession of both removed PI and pseudonymously processed information are additionally obligated to provide enhanced security in order to safeguard the integrity of the pseudonymously processed information (pursuant to the amended PPC Rules and amended Article 35-2(2)).

Notably, and in divergence from the EU GDPR approach to pseudonymously processed information, the 2020 Amendments’ rules governing treatment of such information vary according to the Operator involved. With respect to pseudonymously processed information handled by an Operator in simultaneous possession of the removed (and separately handled) PI, amended Article 35-2 stipulates the following specific requirements:

(i) a prohibition of the collation of such information with other data, such as the removed PI, in a manner which could identify data subjects;

(ii) strict application of the principles of purpose limitation and necessity thereto;

(iii) a prohibition on usage of any contact information contained therein to phone, mail, email or otherwise contact data subjects;

(iv) a prohibition of any transfer thereof to third parties (excluding, amongst others, “entrusted” Operators pursuant to Article 23(5)), unless such transfer is permitted by law or regulation (alternatively, the transfer of pseudonymously processed information by data subject consent is permissible if such information are instead handled as PI);

(v) limitation of the Operator’s disclosure obligation to notice by publication, in the event of the acquisition of such information or an intended alteration of the processing purpose;

(vi) non-applicability of breach notification obligations pursuant to amended Article 22-2, provided that the removed PI are not also subject to the breach; and

(vii) the elimination of data subjects’ rights regarding their pseudonymously processed information, with the exception of their Article 35 right to receive a prompt and appropriate response to their complaints (subject to the Operator’s best efforts).

In addition to the above, the APPI’s ‘general’ requirements pursuant to Articles 19-22 will apply to pseudonymously processed information handled by an Operator which simultaneously (but separately) possesses the removed PI. Such Operator will be required to:

(i) maintain accuracy of the pseudonymously processed information (for as long as their utilization remains necessary, after which their immediate deletion – alongside the deletion of the removed PI – is required, subject to the Operator’s best efforts);

(ii) implement necessary and appropriate security measures to prevent leakage, loss or damage of the pseudonymously processed information; and

(iii) exercise necessary and appropriate supervision over employees and entrusted persons handling the pseudonymously processed information.  

In contrast, with respect to pseudonymously processed information handled by an Operator which does not simultaneously possess the removed PI, amended Article 35-3 prohibits such Operator from acquiring the removed PI and/or collating the pseudonymously processed information with other information in order to identify data subjects, and limits the applicable provisions of the 2020 Amendments to the following:

(i) the implementation of necessary and appropriate security measures to prevent leakage (a simplified version of Article 20);

(ii) the exercise of necessary and appropriate supervision over employees and entrusted persons handling such information (pursuant to Articles 21 and 22);

(iii) a prohibition on usage of any contact information contained in the pseudonymously processed information to phone, mail, email or otherwise contact data subjects;

(iv) a prohibition of any transfer of such information to third parties (excluding, amongst others, “entrusted” Operators pursuant to Article 23(5)), unless such transfer is permitted by law or regulation (alternatively, the transfer of pseudonymously processed information by data subject consent is permissible if such information are instead handled as PI); and

(v) the elimination of data subjects’ rights regarding their pseudonymously processed information, with the exception of their Article 35 right to receive a prompt and appropriate response to their complaints (subject to the Operator’s best efforts).

In addition to pseudonymously processed information, the 2020 Amendments, pursuant to Article 26-2, introduce a fourth category of information: ‘personally referable information’. This category includes, for example, cookies and purchase history: items which may not independently be linkable to a specific individual (and thus would not constitute PI), but which could become PI if transferred to an Operator in possession of additional, related data. To account for such qualifying transfers, the 2020 Amendments introduce a consent requirement (such as an opt-in cookie banner), as illustrated in the sketch below.
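
As a loose illustration only, the consent requirement can be pictured as a gate on transfers that would render personally referable information identifiable at the recipient’s end; the function and parameter names below are invented for this sketch:

```python
def may_transfer_personally_referable_info(
    recipient_can_link_to_individual: bool,  # would the data become PI on receipt?
    data_subject_opted_in: bool,             # e.g. via an opt-in cookie banner
) -> bool:
    """Gate a transfer of cookies, purchase history, or similar data."""
    if not recipient_can_link_to_individual:
        return True  # remains non-PI at the recipient: no consent gate applies
    return data_subject_opted_in  # qualifying transfer: opt-in consent required
```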

In the case of overseas transfers, the transferring Operator must additionally inform the data subject as to the data protection system and safeguards of the recipient country and third party, as well as take ‘necessary action to ensure continuous implementation’ of APPI-equivalent safeguards by such recipient third party. Unlike for PI, the data subject does not have a right to request additional details regarding the ‘necessary action’ taken by the Operator with respect to an overseas transfer of personally referable information.

5. Preparing for the 2020 Amendments: Next Steps for Japanese and Foreign Operators

Companies conducting business in or with Japan should be mindful of the demanding nature of the 2020 Amendments to the APPI, and the stringency with which the PPC will seek to enforce them – particularly in view of the dismay caused by the LINE matter and the likelihood of efforts by the PPC to avoid similar incidents in the future.

Moreover, as the European Commission finalizes its first review of its 2019 adequacy decision on Japan, the PPC’s interpretative rules and enforcement trends may further intensify, with the aim of bringing Japanese data protection legislation closer to global standards, including the EU GDPR framework. Bearing this in mind, companies – including those not currently subject to the APPI, but which provide goods and/or services to individuals in Japan – would be wise to proactively make the necessary modifications to their internal data protection policies and mechanisms, in order to ensure operational compliance with the amended APPI by April 2022.

For those Operators involved in international transfers of PI from Japan, the absence of a PPC-issued “standard contractual clauses” template renders reliance on contractually imposed APPI-equivalent standards pursuant to amended Article 24(3) difficult, and from a compliance standpoint uncertain. However, one potential solution for Operators preparing to rely on this transfer mechanism for overseas PI transfers (excluding to the EEA or UK) may be the European Commission’s revised Standard Contractual Clauses (‘New SCCs’), which are due to be published in early 2021. Subject to certain necessary modifications (of jurisdictional clauses and so forth), Operators may consider utilizing the New SCCs as a starting point to bind recipient third parties to the stringent data protection standards and obligations of the 2020 Amendments.

Operators engaged in transferring PI should also be mindful of the 2020 Amendments’ onerous due diligence obligations with respect to overseas third parties. Prior to and during any cross-border engagements involving Japan-origin PI, Operators must actively ensure that their third-party recipients of such PI (including partners, vendors and subcontractors, as well as each of their respective partner, vendor and subcontractor recipients, and so forth) successfully implement, and continuously maintain, APPI-equivalent measures.

The 2020 Amendments’ enhanced disclosure obligations invite data subjects to hold Operators accountable with respect to the preventative and/or reactive measures Operators take – or fail to take – to protect their PI. Operators engaging foreign third parties should therefore consider reviewing and amplifying their due diligence of such entities, in addition to assessing the laws in each recipient country, in order to proactively identify and devise solutions to address potential obstacles to APPI adherence overseas.

The 2020 Amendments’ broadened extraterritorial application will also require non-Japanese companies to modify their internal data breach assessment and notification systems, to ensure that the PPC and data subjects in Japan are appropriately notified in the event of a qualifying breach; and to implement any necessary changes to their data subject communications platforms or data subject rights request forms, to enable data subjects in Japan to successfully exercise their amended APPI rights from April 1, 2022.

Once published, the PPC guidelines to the 2020 Amendments will further clarify (and potentially amplify) Operators’ compliance obligations with respect to each of the topics addressed in this blog post. The PPC’s findings in regard to LINE’s conduct may also have significant bearing on future APPI enforcement trends and risks. Therefore, in addition to implementing necessary measures to ensure operational compliance with the 2020 Amendments, companies processing covered PI and interested data privacy professionals should look out for these items over the next several months.   
