FPF Releases Follow-Up Report on Consumer Genetics Companies and Practice of Transparency Reporting


Today, the Future of Privacy Forum (FPF) published “Consumer Genetic Testing Companies & The Role of Transparency Reports in Revealing Government Requests for User Data,” examining how leading consumer genetic testing companies require valid legal processes before disclosing consumer genetic information to the government, as well as how companies publish transparency reports around such disclosures. The report analyzes transparency reports from leading consumer genetic testing companies to draw conclusions about the frequency of law enforcement requests, the type of information requested, the frequency with which companies complied with law enforcement requests, and how consumer genetic testing companies’ transparency practices differ from other related industries.

The report builds on FPF’s 2018 “Privacy Best Practices for Consumer Genetic Testing Services,” which acknowledges that government agencies may have legitimate reasons to request access to consumer genetic data from time to time, but requires that companies only provide access to such information when required by law. The new report will be helpful in conveying the benefits of transparency reporting, the different avenues for law enforcement requests, and the types of law enforcement requests issued to consumer genetics companies.

“Genetic data can help law enforcement agencies solve crimes and vindicate wrongfully accused individuals, but unfettered government access to genetic information held by commercial services presents substantial privacy risks,” says FPF Vice President of Policy John Verdi. “Transparency reporting plays a key role in mitigating these risks by shedding light on how consumer DNA testing services respond to government access requests for both genetic information and non-genetic account information.”

The report articulates five main conclusions.

To learn more, read the report.

FPF & VUB Roundtable – Banking on Personal Data

Authors: Caroline Hopland, Ine van Zeeland, and Rob van Eijk



Last week, the Future of Privacy Forum (FPF) and Vrije Universiteit Brussel (VUB) hosted a roundtable on open banking. Twenty subject-matter experts discussed some of the main challenges at the intersection of banking and data protection law: consent and its various interpretations, the (re)use and sharing of personal data, and interoperability. The European Data Protection Board (EDPB) has begun to address these unresolved questions, and the roundtable highlighted aspects of each of them.

In terms of next steps, FPF and VUB will examine how the EDPB has set out the path forward, with another round of discussions planned for mid-September. The next question is: to what extent can insightful use cases from other parts of the world serve as an inspiration to European banks, regulators, and watchdog groups?


Open Banking: EDPB guidance

On July 17, 2020, the EDPB released its Guidelines 06/2020 on the interplay of the Second Payment Services Directive (PSD2) and the General Data Protection Regulation (GDPR), in a version for public consultation. While transaction information is increasingly digitized and banks are announcing data sharing agreements with ever more external parties, the European Commission plans to announce a new digital finance strategy in a continued effort to ‘open up’ banking. Industry experts, however, warn that many questions remain unresolved when it comes to banks sharing data with third parties.

To consumers, the situation appears complex. They tend to hold banks accountable whenever something goes wrong, as these are the familiar parties to them. Roundtable participants from South Africa and Israel confirmed this was the case in their countries as well: criticism for malpractices at fintechs is still directed at banks. Government authorities or regulators could perhaps play a role in improving trust among the wider public. For instance, in the UK there is a dispute management system for disagreements between banks and third parties; this could be offered to consumers as well. As it is, consumers whose rights have been breached can only go to court, which is a high threshold for many. When it comes to data breaches, it would also be in the interest of fintechs to have clarity on liabilities in order to take out adequate insurance.


The difference between consent under PSD2 & the GDPR

The first presenter discussed the difference between consent under PSD2, the European directive regarding digital payment services, and the GDPR, in various banking situations.

They explained that PSD2 sets out rules concerning:

– Strict security requirements for electronic payments and the protection of the financial data of Payment Service Users (PSUs), guaranteeing safe authentication and reducing the risk of fraud;

– The transparency of conditions and information requirements for payment services; and

– The rights and obligations of PSUs and providers of payment services.

They stated that the GDPR, by contrast, generally focuses on strengthening the rights of data subjects and sanctioning violations of those rights. Both PSD2 and the GDPR set standards for the safekeeping of personal data and the provision of information to PSUs (who are data subjects under the GDPR when they are individuals). A Third Party Provider (TPP) must comply with both PSD2 and the GDPR; where one instrument is silent and the other restricts the processing of personal data, the more protective rule prevails (e.g., for the processing of special categories of data, Article 9 of the GDPR applies). Accordingly, the processing of sensitive personal data, such as health data, is prohibited unless one of the exemptions set out in Article 9(2) of the GDPR applies.

As for consent, the ‘explicit consent’ prongs within Article 94 of the PSD2 should be seen as an additional safeguard to ensure the consumer fully consents and understands the contract. However, PSD2 consent for the parameters of the interaction with the financial institution and GDPR consent for personal data processing should not be conflated. The presenter also touched upon Germany’s pragmatic approach, where consent of a PSU under Article 64 of the PSD2 (‘authorization’) also includes the explicit consent for data processing.

The presenter described the roles and responsibilities of a TPP and an Account Servicing Payment Service Provider (ASPSP) under PSD2 and the GDPR with respect to safeguarding personal data during data transfers. The ASPSP must comply with Articles 66(1), 66(4), 67(1), and 67(3) of the PSD2, and the transfer of client data is justified under Article 6(1)(c) of the GDPR (a legal obligation). Once the TPP obtains access to a consumer’s data, it assumes its own responsibility with respect to processing personal data. If a bank’s interface complies with the Regulatory Technical Standards (RTS) on Strong Customer Authentication and Common and Secure Communication, the data flow will be secured. The TPP must then explore technical solutions to exclude categories of data that are not needed; this remains an open, unsolved issue (a rough illustration follows).
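To make the filtering idea concrete, here is a minimal sketch of the kind of data-category exclusion a TPP might apply before further processing. The field names and required categories are illustrative assumptions, not an actual PSD2 or RTS interface.

```python
# Minimal sketch of data-category filtering by a TPP (illustrative field names;
# not an actual PSD2/RTS interface). Only fields the service needs are kept.

REQUIRED_CATEGORIES = {"iban", "booking_date", "amount", "currency"}

def minimize(transaction: dict) -> dict:
    """Keep only the data categories the payment service actually needs."""
    return {k: v for k, v in transaction.items() if k in REQUIRED_CATEGORIES}

raw = {
    "iban": "DE89 3704 0044 0532 0130 00",
    "booking_date": "2020-07-01",
    "amount": "42.00",
    "currency": "EUR",
    # Fields like these could reveal special categories of data (GDPR Art. 9):
    "counterparty_name": "City Pharmacy",
    "remittance_info": "invoice 123, prescription refill",
}

print(minimize(raw))  # unneeded, potentially sensitive fields are dropped
```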

The first presenter mentioned: ‘Consent requires that information is clearly provided to the data subject. The data subject should know where the data flow is going, and to whom the data is being disclosed. This should be as precise as possible. This is why we are trying to precisely describe the purpose, parties, and recipients of the data flow. Bottom line: Consent under PSD2 might be totally valid, but not valid under the GDPR.’


Data (Re)use

The second presenter further touched upon these issues and spoke about what they mean in practice. The second presenter said that, ‘the idea behind PSD2 is that it should allow for competitiveness in the market.’

They discussed the purpose limitation requirements under PSD2 and the GDPR. Articles 66 and 67 of the PSD2 specify the purposes for which the data may be processed, but the rights the GDPR affords to data subjects create some disparity. This is reflected in the EDPB’s Guidelines, which therefore take a dual approach depending on whether the PSU is a legal entity or not. Article 5(1)(b) of the GDPR and Articles 66(3)(g) and 67(2)(f) of the PSD2 set out the specified purposes for processing and are almost identical. PSD2 thereby supplies the first element of the GDPR’s ‘purpose limitation’ principle by specifying the purposes for which the data is processed.

They stated that the specified purposes in Article 66(3)(g) of the PSD2 limit the Payment Initiation Service Provider’s (PISP) processing to payment initiation services, namely to:

– Initiate the payment order

– Instruction from PSU + ‘consent’

– Instruction to Account Servicing Payment Service Provider (ASPSP)

– Receive authorization from ASPSP;

While Article 67(2)(f) of the PSD2 limits the Account Information Service Provider’s (AISP) processing to account information services, namely to:

– Instruction from PSU + ‘consent’

– Access payment accounts held by PSU

– Receive authorization from the ASPSP

– Receive account information from ASPSP

– Display account information back to PSU.

The presenter explained that payment data transactions are not considered personal data if the data relates to a Corporate PSU, whereas such data would amount to personal data for individual PSUs.

Moreover, they described the EDPB’s Guidelines on PSD2 relating to further processing, and explained that:

– Compatibility is unavailable under Article 6(4) of the GDPR (paragraph 22 of the Guidelines). The relevant lawful basis for processing under the PSD2 is performance of a contract, for which the ‘necessary’ requirement has been restrictively interpreted (see EDPB Guidelines 2/2019).

– Further processing of payments data can only be based on EU or Member State law or the data subject’s consent. The Guidelines note that relevant Member State law includes counter-terrorism and anti-money laundering legislation (paragraph 23 of the Guidelines) but leave a question mark as to whether fraud prevention and detection activities are captured (paragraph 70 of the Guidelines, read with Article 94(1) of the PSD2, alludes to this, but the Guidelines are not explicit).

In practice, this means that: 1) PSD2 is not lex specialis; the two regimes operate together; 2) PSD2 creates a dual regime: for a corporate PSU, the only permitted processing appears to be that allowed by Articles 66 and 67 of the PSD2, with no recourse to consent under the GDPR; 3) there are no compatible purposes under Article 6(4) of the GDPR; 4) EU and Member State law must be kept in mind; and 5) consent, especially consent that meets the requirements of the GDPR, plays a very important role in determining how ancillary services can be provided under PSD2 to individual PSUs (i.e., data subjects). A sketch of this decision logic follows.
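As a rough illustration of these takeaways (simplified inputs; not legal advice), the decision logic for further processing of payments data might be sketched as follows:

```python
# Sketch of the dual-regime logic for further processing of payments data,
# following the takeaways above (simplified; not legal advice).

def further_processing_permitted(psu_is_individual: bool,
                                 required_by_eu_or_member_state_law: bool,
                                 gdpr_grade_consent: bool) -> bool:
    if required_by_eu_or_member_state_law:
        # e.g., anti-money laundering or counter-terrorism obligations
        return True
    if not psu_is_individual:
        # Corporate PSU: processing stays within PSD2 Articles 66/67;
        # there is no recourse to GDPR consent.
        return False
    # Individual PSU (a data subject): Article 6(4) compatibility is
    # unavailable, so consent meeting GDPR requirements is the remaining route.
    return gdpr_grade_consent

print(further_processing_permitted(True, False, True))   # True: consenting individual
print(further_processing_permitted(False, False, True))  # False: corporate PSU
```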

The second presenter stated: ‘If the fulfilment of the account information or payment initiation service falls under performance of the contract, then the TPP ought to also get the consent of the data subject to see if they can share the data with third parties so that they can offer them services, unless this can be included in the contract; however, this will need careful consideration in line with the EDPB’s Guidelines 2/2019.’

The second presenter expressed that: ‘If, for example, PSD2 deals with consolidation of various accounts for purposes of analytics as an account information service, then the account information service ends because the consolidation has taken place. If the purpose is different and not previously covered in the contract – assuming it can be covered in the contract and that such additional services are necessary – the TPP must obtain consent from the individual PSU to share the PSU’s data with recipients for the further purposes of X, Y, Z. However, an ecosystem based on consent is unlikely to be viable in practice. Further clarity from the EDPB is required owing to the inherent conflict between what the Guidelines suggest and what PSD2 was designed to achieve.’


Interoperability

The third presenter focused on the Financial Data Exchange (FDX) model. FDX has the potential to be an alternative to the open banking model that has been gaining momentum in, e.g., the UK and Australia. FDX is user-centric by design and creates a consent receipt with which users’ data can be traced through the data-sharing ecosystem (e.g., from the bank to the permissioned party). FDX focuses on control, access, transparency, traceability, security, and data minimization, and ensures that consumers have full control of their data. By creating a fully traceable ecosystem of consent receipts, users will be able to identify the liable party if there is a breach of their data.
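To illustrate the consent receipt concept, here is a minimal sketch of what such a record might capture for each permissioned transfer. The field names are assumptions for illustration, not the actual FDX schema.

```python
# Minimal sketch of a consent receipt as the FDX model envisions it: a record
# per permissioned transfer, keeping the data flow traceable end to end.
# Field names are illustrative assumptions, not the actual FDX schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from uuid import uuid4

@dataclass
class ConsentReceipt:
    user_id: str
    data_source: str        # e.g., the user's bank
    recipient: str          # the party permissioned to receive the data
    data_categories: list   # what was shared, supporting data minimization
    purpose: str            # why it was shared
    receipt_id: str = field(default_factory=lambda: str(uuid4()))
    granted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    revoked: bool = False   # users retain control: consent can be withdrawn

# With one receipt per transfer, a breach can be traced to the liable party
# by querying receipts for the affected recipients and data categories.
receipt = ConsentReceipt("user-123", "ExampleBank", "BudgetingApp",
                         ["account_balance", "transactions"], "personal budgeting")
print(receipt.receipt_id, receipt.recipient)
```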


To learn more about FPF in Europe, please visit fpf.org/eu.


11th Annual Privacy Papers for Policymakers — Call for Nominations

The Future of Privacy Forum (FPF) invites privacy scholars and authors with an interest in privacy issues to nominate finished papers for consideration for FPF’s annual Privacy Papers for Policymakers Award.


AWARDS EVENT

The Future of Privacy Forum will invite winning authors to present their work at an annual event with top policymakers and privacy leaders in the Spring of 2021 (date TBD). FPF will also publish a digest of the summaries of the winning papers for distribution to policymakers in the U.S. and abroad.

California SB 980 Would Codify Many of FPF’s Best Practices for Consumer Genetic Testing Services, but Key Differences Remain

Authors: John Verdi (Vice President of Policy) and Katelyn Ringrose (Christopher Wolf Diversity Law Fellow) 


In July 2018, the Future of Privacy Forum released Privacy Best Practices for Consumer Genetic Testing Services. FPF developed the Best Practices following consultation with technical experts, regulators, leading consumer genetic and personal genomic testing companies, and civil society. The FPF Best Practices include strict standards for the use and sharing of genetic information generated in the consumer context, and companies that pledged to follow the guidelines, including Ancestry, 23andMe, and Helix, committed to those standards.

California lawmakers are currently considering SB 980 (the “Genetic Information Privacy Act”). SB 980 would establish obligations for direct-to-consumer genetic testing companies and others that collect or process genetic information. If passed by the legislature and approved by the Governor, the bill would become effective on January 1, 2021. 

Many of SB 980’s provisions align closely with FPF’s Best Practices, including the bill’s emphasis on consumers’ rights to notice, choice, and transparency. Leading direct-to-consumer genetic testing companies are already obliged to follow the Best Practices, as they have made public commitments that are enforceable by the Federal Trade Commission and state Attorneys General. SB 980’s provisions would extend these requirements to all covered entities that do business in California. 

Some of SB 980’s provisions diverge from the FPF Best Practices. For example, FPF’s Best Practices and SB 980 both require companies to obtain opt-in consent before they use DNA test results for marketing, but proposed amendments to SB 980 would further require companies to provide consumers with an opportunity to opt out of contextual marketing – ads placed on web pages and apps based on page content rather than sensitive personal information. SB 980’s treatment of contextual advertising is also inconsistent with the California Privacy Rights Act of 2020 (CPRA) – the comprehensive privacy ballot initiative that would govern the use of much sensitive health data and would not require companies to provide an opt-out for non-personalized, contextual advertising. In addition, SB 980 diverges from FPF’s Best Practices regarding government access to DNA information, with SB 980 preserving an option for companies to voluntarily provide genetic data to law enforcement in the absence of a court order or consumer consent; FPF’s Best Practices would prohibit such disclosures in most cases. 

Below, we analyze SB 980’s approach to: (1) consent; (2) marketing; (3) privacy policies; (4) research; and (5) penalties and enforcement. We also examine (6) several other federal and state laws that currently regulate genetic privacy. 

  1. Consent for Genetic Data //

FPF’s Best Practices and SB 980 take similar approaches – requiring different methods of consent (express opt-in vs. opt-out), depending on the sensitivity and uses of the data. Both SB 980 and the Best Practices emphasize express, affirmative consent as a baseline requirement for collecting genetic information. They each require that companies provide opt-out consent mechanisms for consumers regarding use of non-genetic information, such as purchase histories or web browsing information.

FPF’s Best Practices require initial express consent for genetic information collection, as well as separate express consent for the use of genetic material outside of the initial scope of collection. Secondary express consent is also required before a company engages in the onward transfer of individual-level information or the use of genetic information for incompatible or materially different secondary uses. Companies are also required to provide additional consent measures for consumers or organizations that submit genetic information on behalf of others. In a similar vein, SB 980 would require prior authorization from consumers for the initial collection of their genetic information and separate authorization for each subsequent disclosure.

FPF’s Best Practices define express consent as a consumer’s statement or clear affirmative action in response to a clear, meaningful, and prominent notice, while encouraging companies to use flexible consent mechanisms that are effective within the context of the service, in-app or in-browser experience, and relationship between the company and individual. 

  2. Marketing //

The FPF Best Practices and SB 980 differ in their approach to consent for marketing and advertising purposes, including marketing on the basis of non-genetic information. The Best Practices prohibit companies from marketing to consumers on the basis of their genetic information, unless the consumer provides separate express consent for such marketing or marketing is clearly described in the initial express consent as a primary function of the product or service. Marketing to a consumer on the basis of their purchase history is permitted if the consumer is provided the option to opt-out of such marketing. Marketing to anyone under the age of 18 is prohibited. 

The Best Practices do not require companies to obtain opt-in consent or provide an opt-out for “customized content or offers by the company on its own websites and services.” This provision is intended to permit 1) contextual advertising (i.e., advertising that is tailored to the other content on a particular page on a website, rather than targeted to a particular user); and 2) first-party offers displayed to users on the basis of information within the same platform, such as when a logged in user receives an email offer based on information they viewed on the company’s own website while logged in. This approach aligns with leading privacy norms, including the approach taken by the Department of Health and Human Services in interpreting the Health Insurance Portability and Accountability Act (HIPAA), which exempts certain first-party communications related to treatment and health-related products from its definition of “marketing.” It is also consistent with the California Privacy Rights Act of 2020 (CPRA), the privacy ballot initiative that would establish rights to opt out of the sale and uses of sensitive health data and would codify a narrow exemption for non-personalized, contextual advertising.

Like FPF’s Best Practices, SB 980 also requires companies to obtain opt-in consent before marketing based on a consumer’s genetic data. A recent amendment would align SB 980’s and FPF’s approaches to marketing based on purchase history, requiring provision of an opt-out. However, a related SB 980 amendment would require companies to provide users with mechanisms to opt out of contextual advertising. This approach would be inconsistent with most leading norms, including HIPAA and the California Privacy Rights Act. This is because, in contrast to targeted or behavioral advertising, contextual advertising is not typically viewed as implicating significant privacy risks. Indeed, privacy advocates have cited contextual advertising as a privacy-protective model that displays marketing messages on web pages based on the content of the page, not information about an individual. 

  3. Privacy Policies //

FPF’s Best Practices require companies to furnish privacy notices that are prominent, publicly accessible, and easy to read. The Best Practices require companies to include certain information within their policies, including standards regarding: data collection, consent, use, onward transfer, access, security, and retention/deletion practices. Furthermore, the Best Practices note that “a high-level overview of the key principles should be provided preceding the full privacy policy.” The overview should take the form of a short document or statement that provides basic, essential information, including whether the privacy policy for genetic information is different than that of other data (e.g. registration data, browsing (cookies or website) tracking, and/or personal information). 

Similarly, SB 980 would require all direct-to-consumer genetic or illness testing services companies to provide consumers with “clear and complete information regarding the company’s policies and procedures for the collection, use, and disclosure, of genetic data” through “a summary of its privacy practices, written in plain language” and “a prominent and easily accessible privacy notice.”  

  4. Research //

FPF’s Best Practices encourage the socially beneficial use of genetic information in research while providing strong privacy protections. This nuanced approach strikes a careful balance between the societal benefits of genetic research and individuals’ privacy interests. The Best Practices require companies to obtain informed consent before using identifiable data for research, and promote research on strongly deidentified datasets. The Best Practices require companies to engage in consumer education and make resources available regarding the implications and consequences of research.

The consumer genetic and personal genomic testing industry produces an unprecedented amount of genetic information, which in turn provides the research community with the ability to analyze large and diverse genetic datasets. Genetic research enables scientists to better understand the role of genetic variation in our ancestry, health, well-being, and more. Recognizing the role of big data in corporate research and the difficulty of obtaining individual consent (see Omer Tene and Jules Polonetsky’s Beyond IRBs: Ethical Guidelines for Data Research, identifying the regulatory gaps between federally funded human subject research and corporate research), the Best Practices embrace the important role of Institutional Review Boards (IRBs) and ethical review processes.

FPF’s Best Practices also provide incentives for researchers and others to deidentify genetic data when practical. Deidentification of genetic information is an incredibly complex issue (see FPF and Privacy Analytics’ “A Practical Path Toward Genetic Privacy”), and the risk of reidentification of genetic data can be limited by rigorous technical, legal, and organizational controls.

SB 980 also requires informed consent before using data for research, “in compliance with the federal policy for the protection of human research subjects” — effectively the same standard as the FPF Best Practices. Similarly, SB 980 also promotes strong deidentification of data, meaning data that “cannot be used to infer information about, or otherwise be linked to, a particular identifiable individual,” provided it is also subject to public commitments and contractual obligations to not make attempts to reidentify the data.

  5. Penalties and Enforcement //

Companies that have publicly committed to comply with FPF’s Best Practices are subject to enforcement by the Federal Trade Commission (FTC) under the agency’s Section 5 authority to prohibit deceptive trade practices. State Attorneys General and other authorities have similar powers to bring enforcement actions against companies that violate broadly applicable consumer protection laws. 

SB 980 includes a tiered penalty structure: negligent violations of the act would be subject to civil penalties not to exceed one thousand dollars ($1,000), and willful violations to penalties between $1,000 and $10,000 plus court costs. Penalties for willful violations would be paid to the individual to whom the genetic information pertains. Penalties could add up quickly – they are calculated on a per-violation, per-consumer basis, as the sketch below illustrates. Earlier versions of SB 980 included criminal penalties; the bill sponsors recently removed criminal liability in favor of a higher civil penalty, raising the maximum fine from $5,000 to $10,000.
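For a sense of scale, here is a minimal sketch of the per-violation, per-consumer calculation using the amounts in the bill; the scenario is hypothetical.

```python
# SB 980's tiered civil penalties (amounts from the bill; scenario hypothetical):
# negligent violations capped at $1,000; willful violations $1,000-$10,000,
# calculated per violation, per consumer.

NEGLIGENT_MAX = 1_000
WILLFUL_MIN, WILLFUL_MAX = 1_000, 10_000

def exposure(consumers: int, violations_each: int, fine_per_violation: int) -> int:
    """Total civil penalty across all affected consumers."""
    return consumers * violations_each * fine_per_violation

# One willful violation affecting 10,000 consumers at the $10,000 maximum:
print(f"${exposure(10_000, 1, WILLFUL_MAX):,}")  # $100,000,000
```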

  6. Other Federal and State Laws //

In the United States, a growing number of sectoral laws apply to companies that process genetic information. The federal Genetic Information Nondiscrimination Act (GINA) prevents genetic discrimination in health insurance and employment, but GINA does not prohibit discrimination in life insurance, disability insurance, or long-term care insurance, nor does it provide general privacy protections or limits on law enforcement uses. In an attempt to close these regulatory gaps, several states have enacted legislation addressing law enforcement access to genetic information and discriminatory practices on the part of life insurance organizations.

Key state laws governing genetic information range from restrictions on law enforcement access to limits on discriminatory use by insurers.

Conclusion //

Genetic and personal genomic tests increase consumers’ access to and control of their genetic information; empower consumers to learn more about their biology and take a proactive role in their health, wellness, ancestry, and lifestyle; and enhance biomedical research efforts. The consumer genetic and personal genomic testing industry is producing an unprecedented amount of genetic information, which provides the research community the ability to analyze a significantly larger and more diverse range of genetic data to observe and discover new patterns and connections. Access to genetic information enables researchers to gain a better understanding of the role of genetic variation in our ancestry, health, well-being, and much more. While genetic information offers incredible benefits, it is also sensitive information that warrants a high standard of privacy protection.

FPF’s Best Practices provide a model for strong privacy safeguards with detailed provisions that support clinical research and public health. Key portions of California SB 980 are consistent with the Best Practices, and would require all companies to provide consumers with important transparency, choice, and security safeguards. Several SB 980 amendments and provisions diverge from the Best Practices in important ways, including how the bill would treat contextual advertising and government access to data. 

Change Could be Soon Coming to the FTC, the Lead U.S. Agency on Privacy

The U.S. Presidential election is almost upon us, and it could have a big impact on the future of the Federal Trade Commission (FTC), the de facto national privacy regulator and law enforcer. The FTC lineup has been steady since 2018 but that could soon change – no matter who wins the election.

Prior to the appointment of the five current Commissioners, the FTC had only two serving Commissioners. This happened because new Commissioners were not appointed by the President and confirmed by the Senate as sitting Commissioners finished their terms and departed. Though all five current FTC Commissioners were appointed in 2018, their terms in office end years apart.

FTC Commissioners’ Terms

Commissioners serve seven-year terms, with appointment and expiration dates set on a staggered schedule. The FTC Act has been interpreted to mean that Commissioners’ seven-year terms run “with the seat,” so that the term expires on the scheduled date, regardless of when the Commissioner was appointed, confirmed, and sworn in. If a Commissioner’s replacement is not appointed at the end of their term, they may stay on until their replacement is seated. This is currently the case with Commissioner Chopra, whose term ended in September 2019.

Commissioners can be re-appointed. Sometimes they leave before the end of their term.

Commission Chairs often leave when there is a change in Presidential Administration. If FTC Chairman Joe Simons chose to step down, his vacancy could be filled by a new Chair, or a non-Chair Commissioner could be appointed and a sitting Commissioner elevated to Chair.

Nominating New Commissioners

Two of the five Commissioners must not be from the President’s political party. It’s typical for the Administration and Senate leaders to agree to “pair” appointees from each party when there is more than one vacancy in order to ease Senate confirmation. That would be unlikely in a new Democratic administration because there would not be a Republican vacancy unless more than one sitting Republican vacated their seats. But “pairing” can happen across agencies, with Senate leaders of each party agreeing to move the nominations they support as part of complicated bi-partisan agreements.

Although the current Commissioners reflect a range of ideological perspectives, the agency has generally been fortunate to be led by appointees recognized for their professionalism, integrity, policy smarts, and ability to collaborate across party lines – traits that will be valued in their eventual successors as well.

The European Commission Considers Amending the General Data Protection Regulation to Make Digital Age of Consent Consistent

The European Commission published a Communication on its mandated two-year evaluation of the General Data Protection Regulation (GDPR) on June 24, 2020 in which it discusses as a future policy development “the possible harmonisation of the age of children consent in relation to information society services.” Notably, harmonizing the age of consent for children across the European Union is one of only two areas in the GDPR that the Commission is considering amending after further review of practice and case-law. Currently, the GDPR allows individual Member States some flexibility in determining the national age of digital consent for children between the ages of 13 and 16. However, upon the two-year review, the Commission expressed concerns that the variation in ages across the EU results in a level of uncertainty for information society services–any economic activities taking place online–and may hamper “cross-border business, innovation, in particular as regards new technological developments and cybersecurity solutions.”

“For the effective functioning of the internal market and to avoid unnecessary burden on companies, it is also essential that national legislation does not go beyond the margins set by the GDPR or introduces additional requirements when there is no margin,” stated the Commission in its report. Some believe stringent child privacy requirements can push companies to abandon the development of online services for children to avoid legal risks and technical burdens, leaving a void to be filled by companies from countries with lax child privacy protections. In addition to the GDPR’s varying ages of digital consent, there are also differing interpretations of the obligations on information society services regarding children. For example, the United Kingdom’s proposed Age Appropriate Design Code defines a child as a person under the age of 18 and lays out additional requirements for information society services to build in privacy by design to better protect children online.

Prior to the GDPR, European data protection law did not include special protections for children, instead providing the same privacy protections across all age groups. The GDPR recognized that children are particularly vulnerable to harm and exploitation online and included provisions extending a higher level of protection to children. However, a universal consensus on the age of a child does not exist, and the flexibility provided by the GDPR creates a fragmented landscape of parental consent ages across the EU. While complying with different ages of consent is relatively straightforward in the physical world, where activities are generally limited within national boundaries, online services operate across borders, and the lack of consistency is a significant barrier for companies. Information society service providers are obliged to verify a user’s age and nationality and confirm the age of consent for children in that Member State before allowing access to their services (a sketch of this check follows). This burden may pose a competitive disadvantage for companies operating in the EU or result in measures depriving children and teens of the benefits of using these services, as companies choose either to invest significant resources in age verification and parental consent mechanisms or to abandon the market for children and age-gate their services instead.
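To illustrate the compliance burden, here is a minimal sketch of the per-Member-State check. The ages listed are an illustrative subset of national rules within the GDPR’s 13–16 range and would need to be verified against current national law before any real use.

```python
# Sketch of the per-Member-State age-of-digital-consent check under GDPR Art. 8.
# Ages are an illustrative subset of national rules; verify before real use.

DIGITAL_CONSENT_AGE = {
    "DE": 16, "IE": 16, "NL": 16,   # GDPR default of 16 retained
    "FR": 15,
    "AT": 14, "ES": 14, "IT": 14,
    "BE": 13, "DK": 13, "SE": 13,   # lowered to the GDPR floor of 13
}

def needs_parental_consent(age: int, member_state: str) -> bool:
    # Where a Member State has not derogated, the GDPR default of 16 applies.
    threshold = DIGITAL_CONSENT_AGE.get(member_state, 16)
    return age < threshold

print(needs_parental_consent(14, "FR"))  # True: below France's threshold of 15
print(needs_parental_consent(14, "BE"))  # False: Belgium lowered the age to 13
```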

The Commission also initiated a pilot project to create an infrastructure for implementing child rights and protection mechanisms online, which is scheduled to commence on January 1, 2021. The project aims to map existing age-verification and parental consent mechanisms both in the EU and abroad and assess the comprehensive mapping results to create “an interoperable infrastructure for child online protection including in particular age-verification and obtaining parental consent of users of video-sharing platforms or other online services.”

Currently, Member States require or recommend varying age verification and parental consent mechanisms. In addition to the UK’s Age Appropriate Design Code, the German youth protection law requires businesses to use scheduling restrictions to ensure that content harmful to children is not available during the day when children are online; to use technical methods to keep children from accessing inappropriate content, such as sending adults a PIN after age verification; or to use age labeling that youth protection software, downloaded by parents on their children’s devices, can read. However, the efficacy of these methods is unclear and unproven. As such, a sweeping review of existing methods may reveal best practices to be widely adopted within the EU and serve as a model for other countries, including the United States.

FPF & BrightHive Release Playbook to Create Responsible Contact Tracing Initiatives, Address Privacy & Ethics Concerns

A new playbook from the Future of Privacy Forum (FPF) and BrightHive, Responsible Data Use Playbook for Digital Contact Tracing, provides a series of considerations to assist stakeholders in setting up a digital contact tracing initiative to track and manage the spread of COVID-19, while addressing privacy concerns raised by these technologies in an ethical, responsible manner.

“Digital contact tracing technologies will play an instrumental role in localities’ responses to the COVID-19 pandemic, but these technologies – if designed, developed, and deployed without thoughtful planning – can raise privacy concerns and elevate disparities,” said FPF CEO Jules Polonetsky. “Contact tracing initiatives should take a measured approach to location tracking, data sharing, purpose limitations and proportionality. If deployed hastily, these technologies risk exacerbating existing societal inequalities, including racial, socioeconomic, and digital divides.”

As COVID-19 continues to spread through communities across the United States and abroad, public health officials are turning to digital contact tracing technologies (DCTT) as a means of tracking cases, identifying sources of transmission, and informing people who may have been exposed in order to prevent further transmission. For contact tracing to be effective, however, people must share sensitive personal information on their whereabouts or with whom they have been in close proximity so that their connections or locations can be mapped and tracked, raising ethical and privacy concerns.

“This playbook is intended to support coalitions, including public health agencies and application developers, in designing and implementing a digital contact tracing initiative,” said Natalie Evans Harris, Head of Strategic Initiatives at BrightHive. “The playbook provides a series of considerations that purposefully address privacy concerns and support the development of ethical and responsible digital contact tracing protocols.”

The playbook walks stakeholders interested in setting up a digital contact tracing initiative through a checklist of actions, from coalition-building in support of the initiative to implementation across the lifecycle of the initiative. To learn more, read the playbook.


About FPF

The Future of Privacy Forum (FPF) is a non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting fpf.org.

About BrightHive

BrightHive helps organizations, networks, and communities securely and responsibly link their data to enhance their impact, empower individual and collective decision making, and increase equity of opportunity. Learn more by visiting brighthive.io.

FPF Welcomes New Members to the Youth & Education Privacy Project

We are thrilled to announce two new members of FPF’s Youth & Education Privacy team. The new staff members – Juliana Cotto and Dr. Carrie Klein – will help expand FPF’s technical assistance and training, resource creation and distribution, and state and federal legislative tracking.

You can read more about Juliana and Carrie below. Please join us in welcoming them to the team!



Juliana Cotto

Juliana Cotto is a Policy Fellow for the Youth & Education Privacy Project at the Future of Privacy Forum. Juliana is primarily supporting FPF’s development of K-12 student privacy resources for educators, families, and students, in addition to evaluating applications for the Student Privacy Pledge. Prior to joining FPF, Juliana was a graduate intern at Consumer Reports, where she worked on consumer protection issues in financial services and the data collection practices of financial technologies and other products. Before pursuing a career in policy, Juliana was an elementary school teacher for three years, teaching in both Chicago Public Schools and Saint Louis Public Schools through the Teach for America program.

Juliana is a 2020 graduate of Carnegie Mellon University’s Heinz College, where she earned her Master of Science in Public Policy & Management. Juliana also holds a Master’s degree in Education from the University of Missouri–St. Louis. She earned her Bachelor’s degree from Johns Hopkins University, where she majored in Behavioral Biology.

I am most excited about contributing to FPF’s Youth & Education Privacy team’s work in developing useful and practical resources for educators to better leverage technology in their classrooms, while upholding strong student privacy protections.


Dr. Carrie Klein

Dr. Carrie Klein is a Senior Fellow and higher education lead on the Future of Privacy Forum’s Youth and Education team. Carrie’s work primarily focuses on advancing conversations, research, and consensus related to higher education privacy. Her work and experience bridge higher education, big data, and law. Prior to FPF, Carrie worked on a National Science Foundation grant at George Mason University (Mason) focused on the use of big data in higher education and as a strategic planning project manager in Mason’s office of the president. She was also the lead for the Federal Trade Commission’s honors paralegal program, where she worked on antitrust cases. She has presented and published numerous pieces on higher education’s use of data, higher education privacy policies, and equity in higher education. Carrie is a graduate of George Mason University and The University of Arizona.

I am looking forward to contributing to FPF’s Youth and Education team’s already strong commitment to, knowledge of, and work on educational privacy, and am especially excited about advancing educational privacy considerations in the higher education space.


Interested in student privacy? Subscribe to our monthly education privacy newsletter here. Want more info? Check out Student Privacy Compass, the education privacy resource center website.

What to Expect from the Court of Justice of the EU in the Schrems II Decision This Week

A decision of the Court of Justice of the European Union (CJEU), expected this Thursday, may have major consequences for dataflows from the EU to the United States, as well as to most other countries in the world. Two key legal mechanisms that ensure the personal data of Europeans is protected when transferred from Europe to the US are under scrutiny: (1) the EU-US Privacy Shield framework (Privacy Shield) and (2) the Controller-Processor Standard Contractual Clauses (SCC) 2010 Decision of the European Commission. The latter also ensures that transfers of personal data originating from the EU to other countries elsewhere in the world enjoy safeguards.

If the Court decides that neither of these mechanisms meets the criteria for respecting fundamental rights under the EU Charter of Fundamental Rights, virtually all dataflows from EU Member States to the US will be left without a lawful ground and could be suspended, either immediately by companies transferring data that do not want to risk hefty fines, or through orders from European Data Protection Authorities, until a new legal mechanism for transfers is put in place. An invalidation of the 2010 SCC Decision would also leave transfers from the EU to other countries, like China or India, outside the law. The CJEU could rule on the validity of both instruments, or only on the SCC Decision, leaving out the assessment of the Privacy Shield (as the Advocate General of the Court recommended in an Opinion published on December 19, 2019).

A complicated case

The CJEU was asked by the High Court of Ireland whether the European Commission’s Decision that establishes Controller-Processor SCCs is valid under EU law. A challenge to its validity was raised before the High Court in Ireland by the Irish Data Protection Commissioner (DPC) in a case concerning a complaint submitted to the DPC by Maximillian Schrems regarding the transfer of his personal data from Facebook Ireland (Europe) to Facebook Inc. (US). This transfer relies on SCCs (standard clauses) that the two entities entered into, which are based on a Decision adopted by the European Commission.

As a rule, the EU General Data Protection Regulation (GDPR) allows transfers of personal data from the EU to countries outside the EU only if an adequate level of protection is afforded to the data, which should not undermine the level of protection that the GDPR confers to personal data of Europeans. Some countries’ legal frameworks are declared adequate by the European Commission at the end of a formal process, meaning that dataflows from the EU to those countries can occur with no restrictions. Where such adequacy decisions are not in place, SCCs allow for companies to enter into a contract with pre-determined content (established through the SCC Decision of the Commission) that provides safeguards for personal data once it is transferred from the EU to a country outside the EU.

Schrems takes the position that his personal data transferred to the US on the basis of the SCC Decision are not adequately protected due to the broad access to electronic communications data that US government agencies have under their national security mandate and a lack of effective judicial remedies for non-US persons in relation to these practices. In accordance with the SCC Decision and with powers granted by the General Data Protection Regulation, the Irish DPC can suspend a specific transfer if the Commissioner finds that the legal regime in the country of destination (in this case, the US) does not afford an adequate level of protection to personal data transferred from the EU.

The Irish DPC challenged the validity of the SCC Decision that sets up this mechanism, one of the arguments being that the SCC Decision does not ensure an effective judicial remedy against government access to data for Europeans once their personal data are transferred to the US. On the other side, Schrems maintains that the SCC Decision is valid under EU law and that the Irish DPC should use the powers granted to it by the SCC Decision and the GDPR to assess the level of protection granted by the US legal framework and eventually to suspend the transfer of his data to the US.

How could a ruling on SCCs affect the Privacy Shield?

The CJEU found in 2015, in the first iteration of this same case, that the predecessor of the Privacy Shield program, the EU-US Safe Harbor framework, was invalid since it did not ensure an adequate level of protection of personal data transferred to the US, in accordance with the fundamental rights to respect for private life and an effective judicial remedy under the EU Charter of Fundamental Rights. The European Commission and the US Government negotiated a new framework, the EU-US Privacy Shield, which was adopted in 2016. The Privacy Shield program was found by the European Commission to ensure an adequate level of protection for personal data transferred to those US companies that self-certify with the Department of Commerce as participating in the framework. Currently, 5,378 companies, both from the US and from Europe, have registered as transferring data from the EU to the US on the basis of the Privacy Shield, as shown in this recent study published by the Future of Privacy Forum.

The Privacy Shield may now be subject to scrutiny by the CJEU in addition to the SCC Decision, depending on whether the Court finds it useful to assess it for the outcome of the main proceedings in this case. A top advisor of the Court, Advocate General Saugmandsgaard Øe, recommended in a non-binding Opinion that the CJEU limit its assessment to the SCC Decision and declare it valid. However, he also noted that if the Court were to consider an assessment of the Privacy Shield necessary for the outcome of the case in Ireland, it should find that, like its predecessor, the framework does not respect the fundamental rights framework of the EU.

Possible outcomes of the case

From the outset it should be clear that the CJEU often finds original solutions to complicated questions, so it is challenging to predict how it will decide in an individual case. For example, in a landmark case from 2014, Digital Rights Ireland, it decided to invalidate the entire Data Retention Directive, even though only the validity of a specific provision of that directive had been raised in the proceedings. The following paragraphs merely map out some of the possible outcomes of the case and refer to potential consequences for global dataflows, but they are by no means exhaustive.

On Thursday, perhaps the only certainty is that the CJEU will provide a judgment on the validity of the 2010 SCC Controller-Processor Decision. It could follow the AG Opinion and declare it valid, or it could find that the Irish DPC is right, and declare it invalid.

Invalidation of the SCC Decision without a transition period: If the SCC Decision is declared invalid and the Court does not provide for a transition period, all transfers of personal data from the EU to countries outside the EU relying on that SCC Decision will become unlawful. It is also likely that, by analogy, the Controller-to-Controller SCC Decision would be declared invalid too. This would affect not only personal data transferred to the US on the basis of SCCs, but also data transferred on the basis of SCCs elsewhere, such as to China, Singapore, India, Brazil, and all other countries that do not have an adequacy decision.

As a consequence, companies may decide to proactively suspend all transfers based on SCCs, effective immediately, in order not to risk GDPR fines for unlawfully transferring personal data outside the EU. Another option is to continue the transfers in practice, but this would be outside the law. Theoretically, they could also rely on a fallback plan, but there is no immediate alternative lawful mechanism for transfers. The other options provided by the GDPR, like Binding Corporate Rules (BCRs), certification mechanisms, and Codes of Conduct (CoC), take a long time to be approved by Data Protection Authorities, and very few are in place (particularly BCRs; there are currently no CoC or certification schemes approved for data transfers). Companies could also rely on one of the derogations allowed by the GDPR, like the consent of the individuals whose data is transferred, but this too risks falling outside the law, since derogations may only apply in exceptional cases and not to repetitive or massive transfers, per guidance from the European Data Protection Board.

However, it should be noted that the European Commission has been working for the past year to update its SCC decisions to take into account new GDPR provisions, and it is very likely that the Commission will adopt the new, updated SCCs soon after bringing them in line with the requirements the Court lays out on Thursday. So there is a possibility that there will only be a short gap before the new SCCs are adopted, even if the Court invalidates the 2010 SCC Decision.

Validity of the SCC Decision is recognized: If the Court upholds the validity of the 2010 SCC Decision, then dataflows from the EU to the rest of the world based on SCCs can continue uninterrupted. The Commission will nonetheless publish updated SCCs in the near future, as expected in accordance with the GDPR, but there will be no gap during the transition from the old clauses to the new ones. Upholding the SCC Decision also means the Irish DPC will likely have to act one way or another on the original complaint submitted by Schrems regarding the transfer of his data to Facebook Inc. in the US. If the DPC suspends that transfer on account of the level of protection afforded to personal data in the US, this may lead to claims by other data subjects seeking to suspend transfers of their data to other US companies that rely on SCCs. Those requests will need to be dealt with on a case-by-case basis. Regardless of what decision the DPC makes, challenges to it should be expected from any of the parties involved.

Possible assessment of the Privacy Shield: As for the validity of the EU-US Privacy Shield, the Court has the option of whether to assess it or not. If it follows the AG Opinion, the Privacy Shield will not be assessed and the dataflows based on it will continue uninterrupted for now. If the Court decides to assess the Privacy Shield, the Commission will have new criteria for its future adequacy (re)assessments. If the Court finds it valid, it would be interesting to see how the Court differentiates this finding from its existing case-law under the first Schrems judgment in 2015.

Invalidation of the Privacy Shield without a transition period: If the Court decides to assess the Privacy Shield and finds it invalid, then all dataflows relying on this framework will become unlawful. Transatlantic dataflows have been in this position before, after the first Schrems judgment in 2015, but at that time companies had, as a fallback plan, the option to enter into SCCs while the US Government and the European Commission agreed on a new general framework for transfers. If both the SCC Decision and the Privacy Shield are declared invalid by the same judgment, at the same time, lawful dataflows from the EU will come to a standstill for a while, unless they go to one of the 12 countries that currently have an adequacy decision or are based on the few approved BCRs or the exceptional derogations.

Assessment of the Privacy Shield without a decision on its validity: One other possibility is for the Court to engage in an assessment of key provisions of the Privacy Shield as obiter dictum, without reaching a conclusion regarding its validity. Such an assessment could provide guiding principles for the European Commission in its next annual evaluation of the effectiveness of the Privacy Shield, as well as in (re)assessing the adequacy of countries or regions/states within federal countries.

Regardless of how the CJEU will rule in this case, the judgment will have consequences for the future of global dataflows.

DCU & FPF Webinar – The Independent and Effective DPO: Legal and Policy Perspectives

On July 8th, Dublin City University (DCU) and the Future of Privacy Forum (FPF) jointly organized the webinar “The Independent and Effective DPO: Legal and Policy Perspectives.” The webinar was designed to help policymakers, regulators, and their staff better understand legal views concerning the position of the Data Protection Officer within an organization. The first half of the discussion centered around the involvement and independence of the DPO from the perspective of European data protection regulators, while the second half of the webinar explored how DPOs perceive their role. Guest speakers included European data protection regulators and DPOs from leading companies. 

To learn more, watch the recording on our YouTube channel.