Future of Privacy Forum Adds Amie Stepanovich, Additional Experts to U.S. & Global Policy Teams
New staff members add expertise and expand US policy engagement for independent data protection non-profit
The Future of Privacy Forum (FPF) has added three new members to its U.S. policy team and a senior fellow to its global team. Amie Stepanovich will join FPF as VP of U.S. Policy, Keir Lamont joins as Senior Counsel, Tatiana Rice joins as Policy Counsel, and Simon McDougall joins as Senior Fellow, Global. FPF’s Stacey Gray assumes a new role as Director of Legislative Research & Analysis.
“FPF welcomes a widely respected voice in privacy law to our team in Amie Stepanovich,” said John Verdi, FPF’s SVP of Policy. “Amie is a leading thinker with deep experience in privacy law and human rights, which makes her an invaluable advisor to policymakers, industry leaders, and academics studying the intersection of tech and data protection.”
Amie will join FPF in January of 2022 as VP of U.S. Policy. Before joining FPF, Amie served as the Executive Director of Silicon Flatirons, a center at the University of Colorado, Boulder focused on convening multi-stakeholder discussions and developing the next generation of technology lawyers and policy experts. Amie also previously served as U.S. Policy Manager and Global Privacy Counsel at Access Now where she worked to protect human rights through law and policy involving technologies and their use. Prior to her time at Access Now, Amie was the Director of the Domestic Surveillance Project at the Electronic Privacy Information Center. Amie has also served on FPF’s Advisory Board.
In a further expansion of FPF’s U.S. team, Keir Lamont and Tatiana Rice have joined the organization to focus on legislative research and analysis in the United States.
“Keir and Tatiana will grow FPF’s ability to serve as an independent voice on complex legislative and policy matters at the state and national levels,” said Stacey Gray, FPF’s newly-named Director of Legislative Research & Analysis. “As the national conversation about data privacy and tech ethics continues, FPF will support policymakers with informational resources on new technologies and regulatory approaches through our public engagement, publications, testimony, events, and other programs.”
Keir joins FPF as Senior Counsel on the legislation team, where he will support policymaker education and independent analysis concerning federal, state, and local consumer privacy laws and regulations. Prior to joining FPF, Keir worked as Policy Counsel for the Computer and Communication Industry Association, where he focused on issues related to privacy, security, and emerging technology. Before joining CCIA, Keir managed the Program on Data and Governance at The Ohio State University Moritz College of Law. He was previously a fellow at ZwillGen and Access Now.
Tatiana joins FPF as Policy Counsel on the legislation team, where she will analyze legal and legislative trends relating to data privacy and emerging technologies on both the federal and state levels. Tatiana comes to FPF from Shook, Hardy & Bacon LLP, where she led biometric compliance efforts and assisted clients with managing data privacy compliance, litigation, and investigation.
Simon McDougall joins FPF as a Senior Fellow, working closely with the FPF Global team. Simon previously was a member of the Executive Team and Management Board of the UK Information Commissioner’s Office. He established the Regulatory Innovation and Technology Directorate, led the ICO’s response to the Covid pandemic, and worked with the CMA, FCA, and Ofcom to establish the Digital Regulation Cooperation Forum. Prior to this appointment, Simon led a global privacy consulting practice at Promontory, an IBM company, leading projects across Europe, the U.S., and Asia. He previously led a similar team for Deloitte in the UK.
FPF brings together privacy experts to explore the challenges posed by technological innovation and develop privacy protections, ethical norms, and workable business practices. FPF believes lawmakers and regulators make better policy decisions when they understand the key technologies, business practices, and legal tools available to regulate privacy and data protection.
FPF’s Stacey Gray Testifies Before Senate Finance Committee Regarding Data Brokers, Urges Congress to Pass a Comprehensive Federal Privacy Law
Today, Future of Privacy Forum Senior Counsel Stacey Gray testified before the U.S. Senate Finance Subcommittee on Fiscal Responsibility and Economic Growth regarding consumer privacy in the technology sector.
Stacey’s testimony explains that the term “data brokers” typically encompasses a wide variety of companies and business practices that use personal information for different purposes, some of which directly benefit consumers, and others that primarily benefit the purchasers or users of data. Recent laws and proposed bills define data brokers as entities without a direct relationship with consumers, and this third-party data processing is at the heart of concerns around privacy, fairness, and accountability; the third-party relationship also presents a challenge for crafting effective regulation. While a “first party” company that collects and uses personal data can exercise enormous influence and market power, there is still some degree of public accountability to users who are aware that the company exists and can delete accounts or raise alarms when practices go too far. In contrast, a business lacking a direct relationship with consumers – like a data broker – does not have the same reputational interests, business incentives, or in some cases legal requirements, to limit the collection of consumer data or protect it against misuse.
The lack of a consumer relationship also means that businesses engaged in legitimate data processing often cannot rely on the traditional privacy mechanisms of notice and choice. Meaningful affirmative consent, or “opt-in,” may be impossible or impractical for a business to obtain, while “opting out” after the fact tends to be both inadequate as a safeguard and impractical for consumers to navigate. Consumers can become overwhelmed with choices, and often lack the knowledge to assess future risks, complex technology, or future secondary uses.
What does this all mean? First and foremost, Congress should pass baseline comprehensive privacy legislation that establishes clear rules for both data brokers and first-party companies that process personal information. It should address the gaps in the current U.S. sectoral approach to consumer privacy, and it should incorporate but not rely solely on consumer choice: a privacy law should also codify clear limits on the collection of data; accountability measures such as transparency, risk assessment, and auditing; limitations on the use of sensitive data; and limits on retention.
In the absence of comprehensive legislation, there are a number of steps Congress can take to address risks related to consumer privacy and data brokers, including 1) empowering the Federal Trade Commission to continue using its authority to enforce against unfair and deceptive trade practices through funding, staff, the establishment of a Privacy Bureau, and civil fining authority; 2) limiting the ability of law enforcement agencies to purchase information from data brokers; 3) enacting sectoral legislation for uniquely high-risk technologies, such as facial recognition; or 4) updating existing laws, such as the Fair Credit Reporting Act, to more effectively cover emerging uses of data.
FPF will continue to provide expert testimony to governing bodies and organizations to shape privacy best practices and policies, both in the United States and globally.
“Are crumbles all that remains of the cookies?” A conversation on the future of ad tech at the Nordic Privacy Arena 2021
On September 27 and 28, 2021, the Swedish Data Protection Forum (Forum för Dataskydd) hosted the 2021 edition of the Nordic Privacy Arena (“Operationalising Data Privacy – Challenges, best practices, and success stories”) in Stockholm, Sweden. This hybrid event brought together privacy practitioners, watchdogs, and academics to debate some of the most pressing issues regarding privacy compliance, such as artificial intelligence (AI), cybersecurity risks, international data transfers, age-appropriate web design, and new enforcement trends.
The end of the first day saw a discussion on online advertising moderated by the Future of Privacy Forum’s Managing Director for Europe, Dr. Rob van Eijk. The panel, entitled “Algorithmic marketing and profiling – are crumbles all that remains of the cookies?”, featured valuable contributions from Dr. Anu Talus, Finnish Data Protection Ombudsman; Michael Hopp, Partner and Head of the Data Protection team at law firm Plesner; Anna Eidvall, Partner and Head of the privacy and data protection practice at law firm MAQS Advokatbyrå; and Patrick Breyer, Member of the European Parliament (MEP) for the Greens/EFA group.
The session was divided into four parts, which are covered in this blogpost: (1) a debate on cookie consent tools, notably on whether users’ browser settings are a suitable means of expressing consent to data collection practices; (2) a discussion about the pros and cons of banning all or some targeted ad practices; (3) the speakers’ views on what to expect from ePrivacy Regulation negotiations over the coming months; and (4) an exchange on whether contextual advertising is a silver bullet or a distant reality.
Cookie consent tools: can browser settings do the job?
Van Eijk started by pointing to the ways in which Traficom, the Finnish telecom regulator, advised service providers to collect website visitors’ consent in its recently issued guidance on cookies and other tracking technologies. He noted that the guidelines drew inspiration from two decisions taken by the Helsinki Administrative Court in April 2021 by excluding browser settings from the list of appropriate means through which users may express their consent to the placement of cookies on their devices.
In response, Talus underlined that Traficom’s guidance had been issued only two weeks earlier, after extensive work with her Office during the drafting process. She also said she was pleased with the outcome, as it reflects the Data Protection Authority’s (DPA) longstanding position on cookie consent. Regarding browser settings, Talus stressed the difficulty of collecting GDPR-aligned consent through such means, although Recital 32 of the GDPR seems to indicate that this is theoretically possible.
Van Eijk then asked MEP Breyer for his thoughts on the Advanced Data Protection Control (ADPC) specification proposed by None of Your Business (NOYB), notably on whether this type of framework — similar to the Do-Not-Track (DNT) specification — has a future in the European regulatory discussion. In reply, Breyer stated that the ADPC addresses the issue of users’ “cookie fatigue”, in that it proposes a practicable way for users to make privacy choices and for website owners to respect them. He also took the view that such proposals could helpfully avoid leaving browser manufacturers free to establish default settings.
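To make the debate concrete, the sketch below shows how a server might read a browser-level privacy signal such as the DNT header, and why such a signal maps awkwardly onto GDPR consent. This is an illustrative example only — the function names and the policy logic are hypothetical, and this is not the ADPC specification itself.

```python
# Hypothetical sketch of honoring a browser-level privacy signal (DNT).
# Under the GDPR, silence is not consent: the *absence* of a DNT header
# cannot be read as an opt-in, which is one reason browser settings alone
# struggle to carry GDPR-valid consent.

def tracking_allowed(headers: dict) -> bool:
    """Decide whether tracking cookies may be set based only on a DNT signal."""
    if headers.get("DNT") == "1":   # user explicitly objects to tracking
        return False
    # No signal (or DNT: 0) still is not affirmative consent by itself,
    # so the safe default remains "no tracking".
    return False

def tracking_allowed_with_consent(headers: dict, banner_opt_in: bool) -> bool:
    """Combine the browser signal with an explicit consent-banner choice."""
    if headers.get("DNT") == "1":
        return False                # an explicit objection overrides an opt-in
    return banner_opt_in            # consent must come from an affirmative act
```

As the sketch suggests, a browser signal works well for expressing an objection, but an affirmative, informed opt-in still has to come from somewhere else — which is precisely the gap frameworks like ADPC try to fill.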
Then, the panel touched on the question: are online players unwilling to correctly configure their consent banners in line with current legal standards? In this regard, Eidvall noted the complexity of the legal framework in this space, with the GDPR interacting with the ePrivacy Directive rules, and with telecom and data protection regulators both playing a role in some jurisdictions, such as Sweden. For the speaker, this interplay meant that consent from users should be sought at two levels: once at the moment of placing the cookie on — or collecting device information from — a user’s device, and again for processing the data for ad targeting or other purposes. It also meant that controllers will often need to carry out Data Protection Impact Assessments (DPIAs) and — after the Schrems II CJEU ruling — Transfer Impact Assessments (TIAs), as well as comply with cumbersome information requirements towards users, the importance of which the recent DPC sanction against WhatsApp Ireland illustrated.
Furthermore, Eidvall added, some smaller businesses (such as online publishers) may be unwilling to change their current cookie practices, as they are often in a “do-or-die” situation: should they decide not to deploy behavioral ads, their revenues may significantly decrease. Thus, she argued that the change should be championed by large tech companies. However, she noted that significant change on their part is unlikely to come unless the risk of being sanctioned becomes higher than the business benefits of using cookies.
Hopp observed that companies are now questioning whether they are required to comply with the privacy rules that are effective in the jurisdiction where the placement of cookies takes place, or with those in their country of establishment, as there have been significant challenges to the latter view. He also noted that there is a lack of clarity around consent requirements when it comes to online tracking, which the EDPB has tried to resolve to some extent but which will hopefully be settled by the new ePrivacy Regulation.
To wrap up the first topic, van Eijk highlighted that the EDPB has tried to reach some harmonization of EU DPAs’ views on cookie consent through its own guidance, which weighs in on “cookie walls” and user affirmative action following the Planet49 case. Additionally, some DPAs are specifically requiring consent to be as easy to refuse as to give when it comes to placing cookies or similar technologies. Conflicting national court decisions on cookie consent may also be a blind spot for companies with an online presence when devising compliance strategies.
Pros and cons of banning all or some targeted ad practices
To stir the debate during the panel’s second segment, van Eijk mentioned EU lawmakers’ discussions on the Digital Services Act’s (DSA) rules on targeted online advertising. In that context, some MEPs favor a strict and encompassing ban on those practices, while others favor narrower prohibitions or only enhanced transparency duties. The moderator was keen to hear speakers’ views, notably on whether consent could serve as a proportionate solution for legitimizing current ad tech practices.
Breyer looked back on last year’s European Parliament (EP) requests to the European Commission (EC) to phase out personalized online ads in favor of contextual ones, which do not rely on personal data processing. One of the reasons he does not support consent-based targeted ads relates to the fact that users are currently being deprived of real choices, due to the use of “dark patterns” that make it more cumbersome for them to reject tracking. Another reason mentioned by Breyer was that, even in cases where individuals are given a fair choice, there are societal issues associated with targeted advertising, leading to more than just individual harms. In this regard, he mentioned that the technology deployed by undertakings to understand and predict the behavior of online consumers is being leveraged to threaten democracies through the spread of disinformation.
The MEP also mentioned that targeted advertising generates issues in the online media landscape. One of the problems he identified was media outlets’ heavy loss of revenues to targeting companies and ad brokers. He believes that forcing online media to rely on contextual advertising — as newspapers and TV networks do — would create a level playing field that would enable the preservation of professional and quality media. Breyer also noted that, although a growing number of EP lawmakers now believe that an opt-in standard for targeted online ads is not the solution, there does not seem to be a majority favoring a ban, which could also weaken the EP’s position in future negotiations with the Council of the EU on the DSA.
The conversation then shifted to the use of metadata in the context of Real-Time Bidding (RTB) requests, and whether a specific ban there would be appropriate. Hopp answered in the negative, instead favoring reliance on self-regulation instruments and clearer regulatory guidance on online advertising. He mentioned that, nonetheless, there are areas in which all the players in the ecosystem should agree that targeting ads is off-limits, such as deliberately serving consumer loan ads to individuals with a high interest in online betting. On the other hand, he also proposed that regulators could prohibit certain practices by relying on the GDPR’s general Article 5 principles, regardless of whether controllers rely on consent or legitimate interests to carry out personalized advertising.
Eidvall concurred, stating that legal bans are seldom effective. Instead, the speaker advised companies in the online advertising space to look at the issue from a data ethics perspective. She stressed that undertakings ought to start thinking about whether certain processing operations that are technically possible are also morally sound, before implementing their digital marketing strategies.
This led to a debate about whether this type of reflection actually happens in self-regulatory frameworks, and about how enforcement takes place in such scenarios. Is it fair to leave it to browser and app manufacturers to shape the ecosystem by limiting what ad tech providers can technically do, as Apple did with its App Tracking Transparency framework? Eidvall took a positive view of such developments, including Google’s phasing-out of third-party cookies, which is scheduled for 2023. She also stressed the importance of avoiding turning privacy into a class issue, which could be done by allowing users who wish to pay with their data to do so, while ensuring that alternative payment methods are available to all.
Van Eijk then took stock of the varying cookie banner configurations and enforcement trends seen across Europe, with the French CNIL’s compliance notices and NOYB’s letters to website owners aimed at fixing some practices. He wondered what role standard-setting bodies and trade organizations, such as the Interactive Advertising Bureau (IAB), could play in future enforcement.
On this note, Hopp acknowledged the importance of the IAB’s role in relation to its members but focused on what consumers could do to change the paradigm. He noted that, as more people become aware of their privacy rights, it is possible that the number of complaints in the face of infringements will increase. He finished by admitting that some providers may be making deliberate choices to overlook compliance in this realm to maximize their revenues, and that collecting valid consent may not suffice to cast them in a favorable light in the public eye.
On whether the design of fair opt-in mechanisms for online targeting would help fight ubiquitous dark patterns, Breyer observed that users tend to reject tracking when they are given a meaningful choice, as illustrated by Apple’s iOS 14.5 launch. Nonetheless, he noted that website owners who deploy “cookie walls” argue that they generally manage to obtain users’ consent. According to the MEP, this is because the majority of cookie banners do not provide fair choices to users, as it is currently hard for them to identify the correct path to reject tracking on most websites. The panelist added that it should not be possible to subject a user to consent requests each time they open a new website, nor for website owners to deny access if users refuse consent. He argued that the information data brokers can gather about internet users is often very sensitive and could be used to manipulate or blackmail them. This, according to Breyer, reinforces the argument for banning targeted ads, also because research has shown that publishers’ revenues are not meaningfully affected if they replace personalized ads with contextual ones.
What to expect from ePrivacy Regulation negotiations?
Van Eijk invited the speakers to make some predictions about how, and how fast, the ePrivacy Regulation trilogue between the EU lawmakers will progress, given that France will take over the Council’s Presidency in January 2022.
Breyer pointed out that France has taken a very harsh stance in ePrivacy negotiations within the Council, notably coming up with data retention language for the Council’s negotiating mandate. After stressing that the Court of Justice of the European Union (CJEU) has consistently ruled that indiscriminate data retention for law enforcement purposes breaches the EU Charter of Fundamental Rights, the MEP revealed that the EP is not willing to compromise at any price in the ePrivacy saga. He predicted that the EP would not accept watering down the existing level of electronic communications confidentiality protection under the ePrivacy Directive, in particular when it comes to the purpose limitation principle.
Talus identified the ePrivacy Regulation as an opportunity for the EU to clarify DPAs’ competencies when it comes to enforcing electronic communications privacy rules. Currently, many countries — including Finland — reserve enforcement powers to their Telecom regulators in this space. Talus believes that companies and individuals do not benefit from the blurring of each authority’s competencies, and that when it comes to personal data processing, DPAs should take the lead, also to ensure the coherent application of the GDPR and ePrivacy norms.
Eidvall stated that, regardless of whether the French Presidency is able to advance ePrivacy negotiations, enforcement and self-regulation — but also data subject awareness — are likely to keep mounting. In response to a question raised by van Eijk on the impact that the upcoming final Belgian DPA decision in the IAB RTB case promises to have on self-regulation instruments, Eidvall mentioned other relevant ongoing inspections, like the ones triggered by NOYB’s complaints.
Hopp said that regulators will be expected to come up with a solution to the cookie conundrum even if the ePrivacy Regulation is not approved. On van Eijk’s question of whether the GDPR already provides grounds for banning dark patterns and conditional consent practices (like cookie walls), Hopp underlined that the question of consent validity is clearly answered in the GDPR, including when it comes to “mandatory consent” practices on news websites.
Contextual advertising: a silver bullet or a distant reality?
Following Breyer’s calls for a paradigm shift towards contextual online ads, the moderator referred to how the Dutch public broadcaster (NPO) applied such techniques and actually bolstered its advertising revenues. He then asked the panelists whether the opportunities for innovation in contextual advertising were worth exploring further.
Eidvall mentioned that her clients often express interest in using anonymization techniques in the online advertising space, to find alternatives that would be equally effective without processing personal data. However, she noted that anonymization itself qualifies as “processing” under the GDPR. In any case, she reported on a number of initiatives that seek to eliminate personal data from the process, also relying on ethical approaches as a unique selling point.
Hopp noted his clients’ lack of appetite for combining, e.g., differential privacy with contextual ads for measuring the reach of their ad campaigns. Instead, he highlighted their concerns about the phasing out of third-party cookies and their wishes to deploy first-party cookies for ad measurement. In this regard, Hopp took the view that anonymizing first-party data for strict measurement purposes should not be legally necessary, as long as companies comply with the purpose limitation principle and do not leverage it for user profiling.
To conclude, van Eijk stated that the lawfulness of first-party data use in the online context depends on the impact on the rights and freedoms of individuals, as well as the nature of the data at stake. In the moderator’s view, processing browsing behavior, children’s data, and special categories of data for targeting purposes may carry unacceptable risks. He pointed to groups that are trying to reach a consensus on what “privacy by design” means in the online advertising context, such as working groups at the W3C. In this regard, it is worth keeping an eye on Google’s announced move away from the FLoC identifier toward more topic-based data, a more privacy-friendly approach that could change the paradigm of the online advertising ecosystem.
Organizations must lead with privacy and ethics when researching and implementing neurotechnology: FPF and IBM Live event and report release
A New FPF and IBM Report and Live Event Explores Questions About Transparency, Consent, Security, and Accuracy of Data
The Future of Privacy Forum (FPF) and the IBM Policy Lab released recommendations for promoting privacy and mitigating risks associated with neurotechnology, specifically with brain-computer interface (BCI). The new report provides developers and policymakers with actionable ways this technology can be implemented while protecting the privacy and rights of its users.
“We have a prime opportunity now to implement strong privacy and human rights protections as brain-computer interfaces become more widely used,” said Jeremy Greenberg, Policy Counsel at the Future of Privacy Forum. “Among other uses, these technologies have tremendous potential to treat people with diseases and conditions like epilepsy or paralysis and make it easier for people with disabilities to communicate, but these benefits can only be fully realized if meaningful privacy and ethical safeguards are in place.”
Brain-computer interfaces are computer-based systems that are capable of directly recording, processing, analyzing, or modulating human brain activity. The sensitivity of data that BCIs collect and the capabilities of the technology raise concerns over consent, as well as the transparency, security, and accuracy of the data. The report offers a number of policy and technical solutions to mitigate the risks of BCIs and highlights their positive uses.
“Emerging innovations like neurotechnology hold great promise to transform healthcare, education, transportation, and more, but they need the right guardrails in place to protect individuals’ privacy,” said IBM Chief Privacy Officer Christina Montgomery. “Working together with the Future of Privacy Forum, the IBM Policy Lab is pleased to release a new framework to help policymakers and businesses navigate the future of neurotechnology while safeguarding human rights.”
FPF and IBM have outlined several key policy recommendations to mitigate the privacy risks associated with BCIs, including:
Rethinking transparency, notice, terms of use, and consent frameworks to empower people around uses of their neurodata;
Ensuring that BCI devices are not allowed for uses to influence decisions about individuals that have legal effects, livelihood effects, or similar significant impacts—such as assessing the truthfulness of statements in legal proceedings; inferring thoughts, emotions or psychological state, or personality attributes as part of hiring or school admissions decisions; or assessing individuals’ eligibility for legal benefits;
Promoting an open and inclusive research ecosystem by encouraging the adoption of open standards for the collection and analysis of neurodata and the sharing of research data with appropriate safeguards in place.
Policymakers and other BCI stakeholders should carefully evaluate how existing policy frameworks apply to neurotechnologies and identify potential areas where existing laws and regulations may be insufficient for the unique risks of neurotechnologies.
FPF and IBM have also included several technical recommendations for BCI devices, including:
Providing hard on/off controls for users;
Allowing users to manage the collection, use, and sharing of personal neurodata on devices and in companion apps;
Offering heightened transparency and control for BCIs that send signals to the brain, rather than merely receive neurodata;
Utilizing best practices for privacy and security to store and process neurodata and use privacy enhancing technologies where appropriate; and
Encrypting sensitive personal neurodata in transit and at rest.
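As a concrete illustration of the last recommendation, the sketch below encrypts a sample neurodata reading at rest using the widely used `cryptography` package’s Fernet recipe (authenticated symmetric encryption). The reading itself is a made-up example, and key handling is simplified for brevity; protection in transit would additionally rely on TLS.

```python
# Minimal sketch: encrypting sensitive neurodata at rest with Fernet.
# The sample reading and field names are hypothetical.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, keep keys in a key management system
fernet = Fernet(key)

reading = {"channel": "C3", "timestamp": 1634567890, "sample_uv": [12.4, 11.9]}

# Serialize and encrypt before writing to storage:
ciphertext = fernet.encrypt(json.dumps(reading).encode())

# Only holders of the key can recover (and authenticate) the plaintext:
recovered = json.loads(fernet.decrypt(ciphertext))
assert recovered == reading
```

Fernet also authenticates the ciphertext, so tampered or corrupted stored data fails decryption rather than yielding silently altered readings.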
FPF-curated educational resources, policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are available here.
Read FPF’s four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs.
Dispatch from the Global Privacy Assembly: The brave new world of international data transfers
The future of international data transfers is multi-dimensional, exploring new territories around the world, featuring binding international agreements for effective enforcement cooperation and slowly entering the agenda of high level intergovernmental organizations. All this surfaced from notable keynotes delivered during the 43rd edition of the Global Privacy Assembly Conference, hosted remotely by Mexico’s data protection authority, INAI, on October 18 and 19.
“The crucial importance of data flows is generally recognized as an inescapable fact”, noted Bruno Gencarelli, Head of Unit for International Data Flows and Protection at the European Commission, at the beginning of his keynote address. Indeed, from the shockwaves sent by the Court of Justice of the EU (CJEU) with the Schrems II judgment in 2020, to the increasingly pronounced data localization push in several jurisdictions around the world, underpinned by the reality that data flows are at the center of daily lives during the pandemic with remote work, school, global conferences and everything else – the field of international data transfers is more important than ever. Because, as Gencarelli noted, “it is also generally recognized that protection should travel with the data”.
Latin America and Asia Pacific, the “real laboratories” of new data protection rules
Gencarelli then observed that the conversation on international data flows has become much more “global and diverse”, shifting from the “traditional transatlantic debate” to a truly global conversation. “We are seeing a shift to other areas of the world, such as Asia-Pacific and Latin America. This doesn’t mean that the transatlantic dimension is not a very important one, it’s actually a crucial one, but it is far from being the only one”, he said. These remarks come as the US Government and the European Commission have been negotiating for more than a year a framework for data transfers to replace the EU-US Privacy Shield, invalidated by the CJEU in July 2020.
In fact, according to Gencarelli, “Latin America and Asia-Pacific are today the real laboratories for new data protection rules, initiatives and solutions. This brings new opportunities to facilitate data flows with these regions, but also between those regions and the rest of the world”. The European Commission has recently concluded adequacy talks with South Korea, after having created the largest area of free data flows for the EU with Japan, two years ago.
“You will see more of that in the coming months and years, with other partners in Asia and Latin America”, he added, without specifying what jurisdictions are immediate in the adequacy pipeline. Earlier in the conference, Jonathan Mendoza, Secretary for Personal Data Protection at INAI, had mentioned that Mexico and Colombia are two of the countries in Latin America that have been engaging with the European Commission for adequacy.
However, until the European Commission officially communicates about advanced adequacy talks or the renewal of pre-GDPR adequacy decisions, we will not know which those jurisdictions are. In an official Communication from 2017, “Exchanging and protecting personal data in a globalized world”, the Commission announced that, “depending on progress towards the modernization of its data protection laws”, India could be one of those countries, together with countries from Mercosur and countries from the “European neighborhood” (this could potentially refer to countries in the Balkans or on the Southern and Eastern borders, like Moldova, Ukraine or Turkey, for example).
“Adequacy” of foreign jurisdictions as a ground to allow data to flow freely has become a standard for international data transfers gaining considerable traction beyond the EU in new legislative data protection frameworks (see, for instance, Articles 33 and 34 of Brazil’s LGPD, Article 34(1)(b) of the Indian Data Protection Bill with regard to transfers of sensitive data, or the plans recently announced by the Australian government to update the country’s Privacy Law, at p. 160). Even where adequacy is not expressly recognized as a ground for transfers, like in China’s Personal Information Protection Law (PIPL), the State still has an obligation to promote “mutual recognition of personal information protection rules, standards etc. with other countries, regions and international organizations”, as laid down in Article 12 of the PIPL.
However, as Gencarelli noted in his keynote, at least from the European Commission’s perspective, “beyond that bilateral dimension work, new opportunities have emerged”. He particularly mentioned “the role regional networks and regional organizations can play in developing international transfer tools.”
One example that he gave was the model clauses for international data transfers adopted by ASEAN this year, just before the European Commission adopted its new set of Standard Contractual Clauses under the GDPR: “We are building bridges between the two sets of model clauses. (…) Those two sets are not identical, they don’t need to be identical, but they are based on a number of common principles and safeguards. Making them talk to each other, building on that convergence can of course significantly facilitate the life of companies present in ASEAN and in the EU”.
The convergence of data protection standards and safeguards around the world “has reached a certain critical mass”, according to Gencarelli. This will lead to notable opportunities to cover more than two jurisdictions under some transfer tools: “[they] could cover entire regions of the world and on that aspect too you will see interesting initiatives soon with other regions of the world, for instance Latin America.
This new approach to transfers can really have a significant effect by covering two regions, a significant network effect to the benefit of citizens, who see that when the data are transferred to a certain region of the world, they are protected by a high and common level of protection, but also for businesses, since it will help them navigate between the requirements of different jurisdictions.”
Entering the world of high-level intergovernmental organizations and international trade agreements
One of the significant features of the new landscape of international data transfers is that it has now entered the agenda of intergovernmental fora, like the G7 and G20, in an attempt to counter data localization tendencies and boost digital trade. “This is no longer only a state to state discussion. New players have emerged. (…) If you think of data protection and data flows, we see it at the top of the agenda of G7 and G20, but also regional networks of data protection authorities in Latin America, in Africa, in Europe”, Gencarelli noted.
One particular initiative in this regard, spearheaded by Japan, was extensively explored by Mieko Tanno, the Chairperson of Japan’s Personal Information Protection Commission (PIPC) in her keynote address at the GPA: the Data Free Flow with Trust initiative. “The legal systems related to data flows (…) differ from country to country reflecting their history, national characteristics and political systems. Given that there is no global data governance discipline, policy coordination in these areas is essential for free flow of data across borders. With that in mind, Japan proposed the idea of data free flow with trust at the World Economic Forum annual meeting in 2019. It was endorsed by the world leaders of the G20 Osaka summit in the same year and we are currently making efforts in realizing the concept of DFFT”, Tanno explained.
A key characteristic of the DFFT initiative, though, is that it emulates existing legal frameworks in participating jurisdictions and does not seem to propose the creation of new solutions that would enhance the protection of personal data in cross-border processing and the trust needed to allow free flows of data. Two days after the GPA conference took place, the G7 group adopted a set of Digital Trade Principles during their meeting in London, including a section dedicated to “Data Free Flow with Trust”, which confirms this approach.
For instance, the DFFT initiative specifically outsources to the OECD the task of solving the thorny issue of appropriate safeguards for government access to personal data held by private companies, an issue that underpinned both the first and second invalidations by the CJEU of an adequacy decision issued by the European Commission for a self-regulatory privacy framework adopted by the US. While the OECD efforts in this respect hit a roadblock this summer, the GPA managed to adopt a resolution during the Closed Session of the conference on Government Access to Personal Data held by the Private Sector for National Security and Public Safety Purposes, which includes substantial principles like transparency, proportionality, independent oversight and judicial redress.
However, one interesting idea surfaced among the proposals related to DFFT that the PIPC promotes for further consideration in these intergovernmental fora, according to Mieko Tanno: the introduction of a global corporate certification system. No further details about this idea were shared at the GPA, but since the DFFT initiative will continue to make its way through agendas of international fora, we might find out more information soon.
One final layer of complexity added to the international data transfers debate is the intertwining of data flows with international trade agreements. In his keynote, Bruno Gencarelli spoke of “synergies that can be created between trade instruments on the one hand and data protection mechanisms on the other hand”, and promoted breaking down silos between the two as being very important. This is already happening to a certain degree, as shown by the Chart annexed to this G20 Insights policy brief, on “provisions in recent trade agreements addressing privacy for personal data and consumer protection”.
An essential question to consider for this approach is, as pointed out by Dr. Clarisse Girot, Director of FPF Asia-Pacific, when reviewing this piece, “how far can we build trust with trade agreements?”. Usually, trade agreements “guarantee an openness that is appropriate to the pre-existing level of trust”, as noted in the G20 Insights policy brief.
EU will seek a mandate to negotiate international agreements for data protection enforcement cooperation
Enforcement cooperation for the application of data protection rules in cross-border cases is one of the key areas that requires significant improvement, according to Bruno Gencarelli: “When you have a major data breach or a major compliance issue, it simultaneously affects several jurisdictions, hundreds of thousands, millions of users. It makes sense that the regulators who are investigating at the same time the same compliance issues should be able to effectively cooperate. It also makes sense because most of the new modernized privacy laws have a so-called extraterritorial effect”.
Gencarelli also noted that the lack of effectiveness of current arrangements for enforcement cooperation for privacy and data protection law surfaces especially when it is compared to other regulatory areas, like competition and financial supervision. In those areas, enforcers have binding tools that allow “cooperation on the ground, exchange of information in real time, providing mutual assistance to each other, carrying out joint investigations”.
In this sense, the European Union has plans to create such a binding toolbox for regulators. “The EU will, in the context of the implementation of the GDPR, seek a mandate to negotiate such agreements with a number of international partners”, announced Bruno Gencarelli in his keynote address.
The more than 130 privacy and supervisory authorities from around the world that are members of the GPA are very keen on enhancing and institutionalizing their cooperation, both in policy matters and enforcement, as is evident from the Resolution on the Assembly’s Strategic Direction for 2021-2023 adopted by the GPA during this year’s Conference, under the leadership of Elizabeth Denham and her team at the UK’s Information Commissioner’s Office. This two-year Strategy proposes concrete action, such as “building skills and capacity among members, particularly in relation to enforcement strategies, investigation processes, cooperation in practice and breach assessment”. The binding toolbox for enforcement cooperation that the EU might promote internationally will without a doubt boost these initiatives.
In a sign that, indeed, the data protection and privacy debate is increasingly vibrant outside traditional geographies for this field, Mexico’s INAI was voted as the next Chair of the Executive Committee of the GPA and entrusted to carry out the GPA’s Strategy for the next two years.
Video recordings of all Keynote sessions at this year’s GPA Annual Conference are available On Demand on the Conference’s platform to attendees who registered for the event.
FPF Files Comments on CPRA Initial Rulemaking
Yesterday, the Future of Privacy Forum filed comments with the California Privacy Protection Agency on the initial rulemaking under the California Privacy Rights Act (CPRA). The CPRA, which comes into effect in 2023, provides protections for sensitive personal information, expands the California Consumer Privacy Act’s opt-out rights, and requires businesses to provide mechanisms for individuals to access, correct, and delete data.
FPF offered resources and recommendations regarding automated decisionmaking, sensitive personal information, global opt-out signals, and de-identification. Among our comments, we suggest that regulations under the CPRA should:
Establish guidelines for automated decisionmaking (ADM) that produces “legal or similarly significant effects.”
Provide that information about “automated decisionmaking” follow NIST interpretability guidelines, and be meaningful and reasonably understandable to the average consumer.
Clarify a range of potential use cases for health and wellness data, by providing a principled, exemplar list of categories that are in or out of scope. In many cases, such distinctions will be based on context and reasonable use.
Ensure opportunities for socially beneficial commercial research using sensitive personal information.
Clarify the role of global opt-out signals in the context of today’s labyrinth of existing permission frameworks, including in authenticated and non-authenticated platforms.
Establish an open process for authoritative approval of new global opt-out signals that meet the technical specifications of the Agency over time.
Seek further input from de-identification experts and researchers to clarify key implementation issues for “deidentified data,” including the role of technical, legal, and administrative controls, and Privacy Enhancing Technologies (PETs).
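To make the comments on global opt-out signals concrete: the most prominent such signal today, the Global Privacy Control (GPC) proposal, is transmitted as an HTTP request header (`Sec-GPC: 1`). A minimal sketch of how a server might detect the signal follows; the function name and the header-handling details are illustrative, not drawn from the CPRA comments themselves.

```python
def gpc_opt_out(headers: dict) -> bool:
    """Return True if a request carries a Global Privacy Control
    opt-out signal (Sec-GPC: 1), per the GPC proposed specification."""
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("sec-gpc") == "1"


# A browser with GPC enabled sends the header on every request:
print(gpc_opt_out({"Sec-GPC": "1", "User-Agent": "ExampleBrowser"}))  # True
print(gpc_opt_out({"User-Agent": "ExampleBrowser"}))                  # False
```

A businesses receiving such a signal would then treat it as a valid opt-out of sale/sharing for that consumer, which is the kind of obligation the rulemaking is expected to clarify.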
Event Report from DigitalxADB: Driving Digital Development across Asia and the Pacific
On October 27, the Future of Privacy Forum (FPF)’s Asia-Pacific office and the Asian Development Bank (ADB) co-hosted an online event titled “Trade Offs or Synergies? Data Privacy and Protection as an Engine of Data Driven Innovation” in the context of DigitalxADB. This edition was the third in ADB’s series of annual knowledge-sharing events for representatives of ADB’s 68 member countries and external partners to learn about and take part in efforts to further integrate “digital” into ADB.
1. Background
By way of background, ADB was conceived in the early 1960s as a financial institution that would be Asian in character and foster economic growth and cooperation in one of the poorest regions in the world. Despite the region’s many successes, it remains home to a large share of the world’s poor: 263 million people living on less than US$1.90 a day and 1.1 billion on less than US$3.20 a day. ADB assists its members and partners by providing loans, technical assistance, grants, and equity investments to promote social and economic development. ADB maximizes the development impact of its assistance by facilitating policy dialogues, providing advisory services, and mobilizing financial resources through co-financing operations that tap official, commercial, and export credit sources.
For FPF, the co-organization of this digital policy dialogue with an international organization as important in the region as the ADB was an opportunity to demonstrate its intention to serve the data protection and privacy community in Asia through a wide variety of means. FPF Asia-Pacific sees its role as an expert and neutral platform for cooperation, capable of supporting all kinds of actions that can contribute to the development of best practices in data protection and privacy, help bridge the gaps between law and practice, advance thought leadership, and support coherent policy development in this area. Such cooperation must involve a wide variety of stakeholders, whether from the public or private sectors, national or regional, where appropriate in partnership with international organizations.
2. Key takeaways
This event consisted of two panel discussions.
The first, titled “Industry Expectations and Cooperation with Privacy Regulators in Asia,” was moderated by Yoonee Jeong (Senior Digital Specialist, ADB) and attended by panelists Marcus Bartley-Johns (Asia Regional Director, Government Affairs and Public Policy, Microsoft), Yen Vu (Principal and Country Manager, Rouse Vietnam), and Royce Wee (Director, Head of Global Public Policy, Alibaba Group).
The second, titled “To Be or to Become a Privacy Regulator in Asia in the 2020s: What Challenges, What Role for International Cooperation?” was moderated by Dr. Clarisse Girot (Director for Asia Pacific, FPF) and attended by panelists Michael McEvoy (Information and Privacy Commissioner, British Columbia, Canada, and Chair, Asia Pacific Privacy Authorities Forum – APPA), Zee Kin Yeong (Assistant Chief Executive, Infocomm Media Development Authority – IMDA, and Deputy Commissioner, Personal Data Protection Commission – PDPC, Singapore), and Prof Thitirat Thipsamritkul (Faculty of Law, Thammasat University, and Vice President of the Digital Council of Thailand).
This post summarizes the discussions in these two stellar panels and highlights key takeaways:
There is growing momentum for data protection and privacy in Asia. In 2020/21 alone, Singapore, Japan, South Korea, New Zealand, China and Thailand upgraded or passed their data protection laws, while Brunei, India, Indonesia, Vietnam, and Sri Lanka, among others, moved closer to adopting data protection frameworks of their own. Panellists Yen Vu and Thitirat Thipsamritkul shared first-hand experiences with the development of data protection legislation in Vietnam and Thailand, respectively, while Zee Kin Yeong and Michael McEvoy shared their national and international experience as seasoned regulators in Singapore and British Columbia, Canada.
A key consideration for data protection law in Asia is finding the right balance between convergence with global standards and adaptation to local conditions. As more data protection laws in Asia tend to be developed with reference to frameworks and policies from outside Asia, policymakers in Asia must find a way to integrate data protection and privacy principles with Asia’s unique histories, cultures, and values to ensure that data protection laws win support from both businesses and citizens.
Data protection and privacy laws are most effective when made and implemented in partnership with businesses, industry associations, and civil society, as well as data protection regulators. Regulators and organisations can each learn important lessons from one another and, together with other key stakeholders, collaborate on tackling shared challenges and taking advantage of shared opportunities in the digital economy.
It is fundamental to support the development of the community of data protection regulators in Asia, whether through actions to support the development of national regulators or through regional cooperation networks such as those now emerging in this region as elsewhere. Based on experience, regulators’ top priority must be the education of businesses, government, and citizens, equipping them with the right knowledge, tools and capabilities to ensure the effectiveness of the data protection law.
Trust, transparency, and accountability are key for businesses operating in Asia. Panellist Marcus Bartley-Johns related how Microsoft has come to recognize that Asian consumers, especially young people, are privacy-conscious and eager to understand how companies use their data. Similarly, panellist Royce Wee explained how trust is a key ingredient for a secure, inclusive, and sustainable digital economy, and increasing trust and transparency can create a win-win situation for consumers and businesses alike. In this regard, data protection laws play an important role to foster that trust.
What challenges to address, and what roles for ADB and FPF?
Thomas F. Abell (Advisor, SDCC and Chief of Digital Technology for Development, ADB) gave the introductory speech to the event and shared his insights into how the COVID-19 pandemic had accelerated the digital economy in Asia Pacific as the region increasingly relies on “digital.” 2020 was a record year in terms of member governments’ demand for ADB’s digital development programmes – roughly 20% of ADB’s projects in 2020 involved a significant digital component. Going forward, ADB is looking to increase support for its member governments in this area, from working on digital programs and security, to seeking thought leaders to drive digital development initiatives, to launching a new program in data analytics early next year.
Dr. Clarisse Girot (Director for Asia Pacific, FPF) explained how global activities have taken on an increasingly important dimension in FPF’s work, with the development of regional offices in Europe, Israel, and most recently, Asia with the recent launch of FPF’s Asia Pacific Office in Singapore. In Asia Pacific, an essential mode of action will be to forge partnerships, run joint events, and bring together businesses, citizens, and international organisations to support governments and regulators in their efforts to adopt laws and policies that address growing privacy expectations, raise the level of data protection, and ultimately, support economic growth and digitalisation in the region, especially in the wake of COVID-19.
From this point of view, the ambitions of FPF and ADB on these issues are fully complementary. The event was thus an opportunity to explore with the panelists what their priority actions in this area could be, including joint actions where appropriate.
Dr. Girot further highlighted the tension between Asia’s status as not only the most populous but also most economically dynamic region in the world and the fact that data protection laws, for historical more than for political reasons, tend to be developed with reference to instruments, frameworks, and policies that have been designed and developed elsewhere – the EU’s General Data Protection Regulation (GDPR) being a case in point. Dr. Girot stressed the need to ensure that national frameworks are compatible with global standards that are necessary in a world where data flows are ubiquitous and underlie the digital economy.
But more prosaically, there is also a need to address challenges that have blocked the adoption of data protection and privacy laws in some jurisdictions where they have been announced as “imminent” for several years. Passing a data protection law is not easy, even less so today than in the past. A major challenge in Asia is how to reconcile data protection laws with the “geopolitically loaded” concept of “data sovereignty” – a concept which has taken root specifically in China and India and looks set to spread elsewhere. Another blocking factor is the legitimate concern that data protection and privacy laws would impose administrative constraints and compliance costs on local businesses, thereby restricting innovation and blocking trade. In addition, baseline data protection laws intersect with sectoral laws, so a lot of finetuning is required, and defining the material scope of the law is not easy. Such concerns also extend to the decision of whether to institute a data protection and privacy regulator and provide it with powers of oversight over governments, among others.
To address these challenges, regional and international cooperation, and cooperation between the public and private sectors, academia and civil society, is essential. Events like DigitalxADB are thus an opportunity to demonstrate the wealth of resources that international cooperation brings. They also help to identify the multiple ways in which both public and private actors, including FPF and ADB, can contribute by providing support for governments and regulators in Asia to tackle these challenges—be it financial, material, or “intellectual”.
The two panel discussions were set up to approach these subjects from two complementary angles.
Panel 1: “Industry Expectations and Cooperation with Data Protection and Privacy Regulators in Asia”
This first panel, moderated by Yoonee Jeong, consisted of industry representatives from different backgrounds who shared the same difficulties in complying with fluctuating and variable data protection rules in the region. During the conversation, each panelist was asked how they envision that ADB or FPF could usefully contribute to addressing these challenges.
Below is a synthesis of the main comments made by each panelist in the course of the conversation.
Marcus Bartley-Johns (Asia Regional Director, Government Affairs and Public Policy, Microsoft) opened his comments by lauding the efforts of ADB and FPF in coming together to convene this dialogue, and underlining the great value that lies in the combination of ADB’s unique convening power and ability to work with countries across the region on these issues, and FPF’s capacity to share expertise globally on what’s happening in privacy regulation and its deep connections with the privacy community across Asia. He went on to share two key insights from Microsoft’s view of data protection and privacy issues around the Asia Pacific region.
The first is that privacy is essential for both organisations and individuals across Asia, and therefore, effective privacy regulation is central to the growth of the digital economy across the region. In this respect, Microsoft and research firm IDC conducted a survey of the perceptions and expectations of trust in digital services of more than 6,000 consumers in this region in 2019. 53% of those consumers reported feeling that their personal privacy had been compromised or that their trust had been breached when using digital services. A higher share of respondents who reported negative experiences were young people. This challenges the oft-held assumption that because young people – especially in Asia – are high consumers of digital services, they do not care about privacy. A further example: of the 19 million unique visitors to Microsoft’s privacy dashboard in 2020, Australia, China, Japan, Korea, and India were all in the top 20 countries of visitors who came to view, export, or delete their data.
The second is that opportunities for collaboration on data protection and privacy abound. Organisations like FPF and ADB (among other stakeholders) can play a key role in developing privacy regulation through providing resources and technical assistance to countries that are thinking about privacy regulation and consultation to countries that are drafting new privacy regulations or amending their existing regulations. In particular, regulation needs to be technology-neutral as there is a temptation among regulators in Asia to look for an easy technical fix – such as contractual terms – to demonstrate privacy protection.
There are also opportunities for regional cooperation to counter the trend of countries working in “silos,” leading to a fragmented regulatory framework that will not support trade and investment and will increase costs for local companies – especially Small and Medium Enterprises (SMEs), which unlike large multinational companies (MNCs) cannot invest significant funds and employ hundreds of full-time engineers to transform their data management. In this regard, Singapore has been instrumental in driving greater regulatory coherence in ASEAN. More work on interoperability is needed to ensure that compliance will be as straightforward as possible for SMEs while still keeping a high bar for privacy protection across the region’s regulatory landscape.
Yen Vu (Principal and Country Manager, Rouse Vietnam) shared the experiences of Vietnam as the country developed its first personal data protection decree, which she hopes will be passed and take effect by the end of this year.
Despite facing technological, economic, and societal challenges, Southeast Asia has an opportunity to become a digital economy hub for Asia. For example, even as large parts of Vietnam were under strict lockdown due to COVID-19, its Internet-based economy still reported growth in transportation, food, e-commerce, and fintech. The challenges come from an ever-shifting regulatory environment in both Vietnam and the region, as well as the need for training and awareness-building for both the public and private sectors.
In 2020, Vietnam became one of the first countries internationally to announce a programme for national digital transformation. Data protection will be key to this digital transformation programme, which aims to develop digital government, economy, and society and to equip Vietnamese digital businesses with global capacity in key areas – including healthcare, education, finance, banking, agriculture, transportation, energy, natural resources, the environment, and industrial protection – over the next decade.
However, the situation on the ground is one of regulatory fragmentation as Vietnam still lacks an omnibus law on data privacy. This has caused confusion and poses challenges for business across all sectors, which must often seek guidance from the government on how to comply with requirements under security laws, such as data localization. There are opportunities for international organisations like FPF and ADB to support Vietnam, especially through capacity-building activities for both the public and private sectors.
Royce Wee (Director, Head of Global Public Policy, Alibaba Group) highlighted that now is a very interesting time to be in Asia because more and more Asian countries are coming up with data protection laws. Thailand recently joined Singapore, the Philippines, and Malaysia as jurisdictions which already have data protection laws in place, while Brunei, Indonesia, Vietnam, and India are moving closer to adopting new data protection laws. China is also a major mover in this space, having passed a trio of data-related laws in a short time – the Cybersecurity Law, Data Security Law, and most recently, the Personal Information Protection Law (PIPL), which came into effect at the start of November 2021.
These data protection laws are not homogeneous but rather, reflect each country’s philosophies, outlooks, and values as well as its unique needs and circumstances. Data protection is not a solely European construct, and each country has to strike a balance between individual rights and control on the one hand and reasonable/legitimate business needs on the other hand.
This can create significant challenges for MNCs like Alibaba Group, whose compliance policies must be localized to meet each jurisdiction’s standards and requirements. In this respect, MNCs typically adopt a “high watermark” standard, such as that set by the EU GDPR, as a starting point and then make adjustments based on specificities in local data protection laws.
However, this is only a narrow view of data protection. Trust remains an overarching objective for these laws and is a key ingredient for a secure, inclusive, and sustainable digital economy. For organisations, trust helps to build long-term relationships with customers in which customers will be more willing to provide more and better-quality data, and organisations will be better placed to provide high-quality services and value-for-money products to meet their customers’ needs.
For regulators, trust in the digital economy allows for greater economic development and dynamism and can help to bridge the digital divide, opening the digital economy to greater participation from all segments of society while also creating better jobs with higher incomes by matching skills and demand and enabling better policy implementation.
The road to trust is one of constant, iterative improvement because – due to fast-paced changes in technology, business models, consumer expectations, and even societal values – the journey never really has an end in sight.
Regulators play an important role in pushing businesses to do more and to do better in a spirit of partnership and goodwill, rather than adversity. At the same time, businesses play an important role in uplifting data protection standards across the board. While MNCs have an important signalling effect, the real power to “move the needle” for data protection standards and processes comes from SMEs, as they represent the vast majority of businesses in Asia. Regulators can do a lot to bring SMEs on board by issuing guidelines, providing clarity on their regulatory intent, and supplying tools and technological solutions. For example, in Singapore, the Infocomm Media Development Authority (IMDA) has come up with “tech packs” containing solutions that SMEs can easily adopt and adapt to meet their business needs while ensuring at least a minimum baseline of data protection.
Cooperative partnership between regulators and businesses is a prerequisite to developing the right culture of data accountability in organizations. Regulators should explain their regulatory objectives, concerns, and priorities but also understand the constraints and limitations in businesses’ daily operations. Similarly, businesses should understand these regulatory objectives, concerns, and priorities, but also provide feedback as part of the consultation process before new laws are passed, to ensure that the laws are practical and effective and that businesses can comply with them. For example, if left to their own devices, some regulators in Asia have a strong tendency to include data localisation requirements in their laws. However, as the digital economy is essentially borderless, this can harm the cross-border data flows necessary for e-commerce and the adoption of cloud solutions.
International organisations like FPF and ADB can, through their thought leadership and convening power, play an important role in the law-making process: supporting innovative projects such as sandboxing schemes; exploring different models for data processing, innovation, and even valuation; promoting harmonisation of baseline global principles and standards for data protection; working with and across regulators and businesses to create mechanisms that facilitate greater trusted and secure cross-border data flows; and promoting discussion and agreement on an ethical framework for data processing that covers emerging technologies such as artificial intelligence, machine learning, and the Internet of Things.
By sharing resources and expertise, regulators and businesses can build trust and solve common problems and achieve common objectives – from improving the transparency of data processing, to putting in place adequate security standards and agreeing on common criteria/list of reasonable and legitimate uses of personal data, to reskilling and upskilling workers for new jobs in the digital economy.
Panel 2: “To Be or to Become a Privacy Regulator in Asia in the 2020s: What Challenges, What Role for International Cooperation?”
This second panel, moderated by Dr. Clarisse Girot, brought together two data protection regulators (Yeong Zee Kin and Michael McEvoy) and an expert involved in the lawmaking process in Thailand (Prof. Thitirat Thipsamritkul). Below is a synthesis of the main comments made by each panellist in the course of the conversation.
Professor Thitirat Thipsamritkul (Faculty of Law, Thammasat University; Vice President of the Digital Council of Thailand) shared her experience with the development of a draft personal data protection law in Thailand, which was ultimately passed in 2019.
Historically, data protection and privacy had been seen as a side issue which was not as essential to Thailand’s digital economy as, for example, cybercrime, cybersecurity, and intellectual property law. Little by little, privacy law became more central to the discussion with the emergence of the EU GDPR and efforts by the public sector, academia, and civil society to bring privacy into legislative discussions around the digital economy. By 2019, with the passage of the Cybersecurity Law, the zeitgeist was that if Thailand needed a cybersecurity law, then it also needed a data protection law.
The legislative process for the resultant Personal Data Protection Act (PDPA) was unique in that it involved extensive collaboration between the public and private sectors, academia, and civil society. In particular, academia was instrumental in shaping the PDPA as it had already created “shadow regulation” in the form of the Thailand Data Protection Guidelines (TDPG) to help Thai companies to comply with the EU GDPR and do business with Europe. The Guidelines were widely used by Thai businesses and drew not only on international standards but also input from local businesses and organisations on the practicality of data protection measures. Even after the PDPA was passed, the Guidelines remained influential for businesses designing their compliance schemes.
Thai society is now ready to comply with the PDPA but has been occupied with the response to COVID-19 for the last year. Due to resistance to the PDPA from certain sectors of the economy, the Thai government has postponed the PDPA’s entry into effect twice. There is a general fear that the PDPA gives too wide a discretion to regulators and the courts, and that judicial interpretation will be uncertain because the PDPA introduces an entirely new framework into Thai law and contains stringent provisions on criminal liability for breaches of the Act.
The postponements have sparked a debate in Thailand as to whether privacy laws should be strengthened or whether the compliance burden should be reduced as a result of the pandemic. However, at the same time, many businesses, including those in the financial and health insurance sectors, have been declaring new privacy protective measures and policies even before the PDPA takes effect.
On a broader note, many data practices in Asia differ significantly from those in Europe or America – for example, Asia has a lot of online shopping livestreams, which are much less common in Europe and America. This means that each region must adopt different methods for implementing data protection and privacy principles, even if the core principles remain the same around the world. However, a shared problem for regulators around the world is capacity-building – and this is where international cooperation can be most effective.
Yeong Zee Kin (Deputy Commissioner, PDPC, Singapore) started with a word of encouragement for Thailand and explained that even Singapore’s journey to enacting data protection legislation started with a voluntary, industry-created model code, introduced in 2001-2002 – a decade before Singapore enacted its own PDPA in 2012. This was a necessary and helpful step towards full legislation, as local online businesses voluntarily adopted the code and began to prepare for full data protection legislation.
Yeong Zee Kin offered three areas of advice for governments and policymakers working on data protection and privacy laws:
the necessity for convergence with global norms when designing laws;
equipping businesses and companies with practical tools to implement the principles within their organisations; and
valuing partnerships with the data protection community and data protection officers, who can act as champions to help to build the data protection ecosystem.
On convergence with global norms, he stressed that nowadays, data “can’t be kept in a bottle” as it flows everywhere – both within and between economies around the globe – especially as companies operate in multiple jurisdictions. Therefore, it is essential to design laws to adhere to accepted global principles to the greatest extent possible because such familiarity is important from the perspectives of both compliance and the expectation of consumers and data subjects. An example of such a global principle is the admonition against localisation of computing facilities. Other relevant global principles can be found in the OECD Privacy Guidelines, the APEC Privacy Principles, and for Southeast Asia, the ASEAN Principles for Data Protection, as well as free-trade agreements like the CPTPP and RCEP.
At the same time, it is also necessary to adapt laws to local conditions – society, culture, and history. The recent amendments to Singapore’s PDPA, which were passed a year ago, illustrate the importance of convergence as well as adaptation to local conditions. In the amendments, Singapore adopted the concept of “legitimate interests” because it had become common in multiple data protection regimes worldwide. However, Singapore also recognized that its local businesses wanted clarity and found a concept as broad and generous as legitimate interests difficult to work with. In implementing the concept, Singapore therefore took a slightly different approach to other regimes and listed out specific examples of legitimate interests in the Schedule to the PDPA. Singapore also took the unique step of creating a “business improvement exception” based on suggestions by local companies but still required express consent, rather than legitimate interests or business improvement, for direct marketing based on feedback from local consumers.
Between convergence with global norms and adaptation to local conditions, we will probably see more regional groupings in data protection laws as factors like geographical proximity heavily influence culture and history, which in turn influence expectations of and approaches to data protection. We should encourage these regional groupings and cooperation – if regulators and policymakers can come together and find a common level, then we might end up with three or four regional groupings, which could then start building bridges between regions to encourage global consistency and convergence.
On equipping businesses with practical tools, Yeong Zee Kin recommended that regulators place themselves in the shoes of the local business owners and managers who will need to implement the principles in legislation. Regulators can use the kinds of common business objectives that companies care about – such as inventory management, analysis of sales performance, and management of customer and HR records – as an entry point for discussing how data can be used to achieve those objectives while also embedding good data protection principles into the process. It is also important to recognize that businesses often need external help. To that end, Singapore’s PDPC curated a brief list of core data protection practices and provided a list of outsourced data-protection-as-a-service providers who can help business owners and managers with compliance.
Michael McEvoy (Information and Privacy Commissioner, British Columbia, and Chair, APPA Forum) agreed that there are many examples of jurisdictions going through a transition period from voluntary standards, guidelines, and principles to full data protection legislation, but added that in some cases, legislation may be a result of pressure from civil society, a shift to a more reformist government, or even simply a fluke of circumstances. However, even where legislation seems to come to fruition suddenly, it is usually the product of many years of work and efforts to educate legislators.
Voluntary industry efforts in British Columbia – such as data breach notification, although there is not yet a legal obligation making notification mandatory in the province – can be the start of good practice, as they create a culture and environment of compliance. In his experience, businesses generally want to “do the right thing” but may not be able to figure out how to do it. Misplaced fears on the part of businesses about regulators’ enforcement powers, and more generally that regulators may not understand the nature of innovative businesses, may delay the adoption of a complete regulatory framework in some jurisdictions. Enforcement is certainly important. But the solution to such concerns is first and foremost for regulators to go out and educate businesses – and in some cases, governments – on what the “right thing” is, and to provide guidance, education, and assistance.
As personal data follows the flow of trade, more and more countries are waking up to the need for effective, sustainable, and trustworthy regulation for an increasingly digital world. This idea underpins the work of the Asia Pacific Privacy Authorities (APPA) Forum over the past 30 years to nurture and promote data protection in the Asia Pacific region. While initially there was not a lot of interest in APPA’s work, there definitely is now: British Columbia, which is home to APPA’s Secretariat and does approximately $14 billion worth of export trade with Pacific Rim countries, has come to recognize the importance of data protection to digital trade, and its legislature now supports APPA financially.
APPA’s 19 members – all data protection regulators in the Asia Pacific – assist one another and share information and techniques to enhance their regulatory expertise. APPA has also extended a hand to jurisdictions outside the region, such as, recently, the Cayman Islands. Countries that are now considering implementing new data protection laws, where none previously existed, are fortunate in that they can learn from the experiences of jurisdictions that have gone through this process, adapting what is useful and avoiding the regulatory missteps that unfortunately happen from time to time. No two countries’ data protection laws will ever be identical, because each country’s laws are informed by its own history and culture. However, countries across the globe share a commitment to at least some commonality, especially in allowing data to flow more freely and securely. In this respect, the GDPR and the concept of adequacy have been very helpful in the search for common ground and convergence on principles for protecting citizens’ data while encouraging trade, innovation, and the flow of data.
The session thus ended on a very encouraging note.
To conclude, ADB and FPF thanked the speakers and announced that they would consider joint actions to support positive data protection developments in the region, in the spirit of cooperation which animated the whole of this session.
This blog was written with the support of the Global Privacy team of the Future of Privacy Forum.
Data Sharing … By Any Other Name(Would Still Be a Complex, Multi-stakeholder Situation)
“It is widely agreed that (data) should be shared, but deciding when and with whom raises questions that are sometimes difficult to answer.”[1]
Data sets held by commercial and government organizations are an increasingly necessary and valuable resource for researchers. Such data may become the evidence in evidence-based policymaking[2] or the data used to train artificial intelligence.[3] Some large data sets are controlled by government agencies, non-governmental organizations, or academic institutions, but many are being accumulated within the private sector. Academic researchers value access to this data as a way to investigate any number of consumer, commercial, and scientific questions at a scale they are unable to reach using conventional research data gathering techniques alone. Such data gives researchers access to information that allows them to answer questions on topics ranging from bias in targeted advertising, to the influence of misinformation on election outcomes, to early diagnosis of diseases through use of health and physiological data collected by fitness and health apps.
Recent attention on platform data sharing for research is only one conversation in the cacophony of cross-talk on data sharing. The term “data sharing” is used in many different ways to describe a relationship in which one organization shares data with another for a new purpose. Some uses of the term relate to academic and scientific research purposes, and some relate to transfers of data for commercial or government purposes. In this moment, when various types of data sharing are a concern elevated even to the attention of the US Congress and the European Commission,[5] it is imperative that we be more precise about which forms of sharing we are referencing, so that the interests of the parties are adequately considered and the various risks and benefits are appropriately contextualized and managed. In the table below, we outline a taxonomy of the multiplicity of data sharing relationships.
Ultimately, the relationships between these entities are complex. In many cases, the relationship is 1-to-many, with a single agency or corporation sharing data with multiple researchers and civil society organizations or, as in the case with data trusts or data donation platforms, potentially one person sharing data with many research or commercial organizations through a trusted, intermediate, steward.[6] Likewise, researchers and civil society organizations may concurrently pursue data from multiple corporate or government organizations, in many cases for the ability to address those challenges that require extremely large quantities of data (Big Data) or complex networks of related data. This data flow is never just along a single channel, nor does it often stop after a single transfer. Governments and corporations share data with researchers; researchers return that data, generate new data, and share analysis and new questions and outcomes back around.
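The steward-mediated, one-to-many pattern described above can be sketched in a few lines of code. This is a purely illustrative model, not any real platform’s API; the class and field names (`DonationPlatform`, `allowed_purposes`, and so on) are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Donor:
    """A person donating data through the platform, with stated sharing preferences."""
    donor_id: str
    # Purposes this donor consents to, e.g. {"academic_research", "commercial"}
    allowed_purposes: set = field(default_factory=set)

@dataclass
class DataRequest:
    requester: str
    purpose: str

class DonationPlatform:
    """The trusted intermediary: releases donor data only for consented purposes."""
    def __init__(self):
        self._records = {}  # donor_id -> (Donor, data payload)

    def donate(self, donor, payload):
        self._records[donor.donor_id] = (donor, payload)

    def fulfil(self, request):
        """Return payloads from every donor who consented to this purpose."""
        return [
            payload
            for donor, payload in self._records.values()
            if request.purpose in donor.allowed_purposes
        ]

# One person's data can reach many requesters, and each request is
# screened against each donor's stated interests.
platform = DonationPlatform()
platform.donate(Donor("d1", {"academic_research"}), {"steps": 9200})
platform.donate(Donor("d2", {"academic_research", "commercial"}), {"steps": 4100})

academic = platform.fulfil(DataRequest("university-lab", "academic_research"))
commercial = platform.fulfil(DataRequest("ad-platform", "commercial"))
print(len(academic), len(commercial))  # both donors allow research; only one allows commercial use
```

The point of the sketch is the control point: the steward, not the individual or the requester, enforces the donor’s specified interests on every transfer.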
Managing these complex relationships requires multi-layered contracts, defined procedures, accountability mechanisms, and other technical and policy controls. The terms for data sharing cover obligations that both parties have, including privacy, ethics, governance, and other good stewardship protocols. Changes in the legislative landscape around data protection, privacy, and security mean that these relationships must adjust periodically to meet legal compliance obligations, whether on the data sharing or the data using side.
At the Future of Privacy Forum, we are working to add context, nuance, and a considered evaluation of the needs of these many players to create guidelines and best practices to support data sharing, particularly for conduct of scientific and evidence-based policy research. What data is shared, under what conditions, controls, contracts, and use environments all have important privacy and governance implications. We have been actively working in this area since 2015, and continue to engage with various interested organizations around the challenges in today’s digital environment. With respect to the sharing of data itself, FPF is focused on finding ways to incorporate proportionate precautions so that any sharing activities adequately protect privacy and are designed with the full understanding of potential harms to the people whose data is transferred or the communities of which they are a part.
Data Sharing Relationships

Each entry below lists the data sharing organization type and the data using organization type, followed by the outcome of the data sharing and, where available, the terms used to describe the data sharing relationship.

Government Agencies → Researchers and Research Institutions
Outcome: Researchers conduct evidence-based evaluations of public programs.

Researchers and Research Institutions → Private Companies or Corporations; Government Agencies; citizen groups and communities
Outcomes: Researchers whose work is sponsored by corporations or who have privileged access to corporate data assets return data gathered for future corporate research and, in many cases, retain copies of that data for future scientific work. Researchers whose work is sponsored by or conducted under a government contract return data gathered for future agency research and, in many cases, retain copies of that data for future scientific work. Citizen groups, journalists, and communities of interest (e.g., patient advocacy groups) can gain access to data about themselves gathered during the research process so that they can use it for future treatment, advocacy, or research participation.
Terms: Return of research data and/or research results[18]

Researchers and Research Institutions → Researchers and Research Institutions
Outcomes: Researchers can reuse other researchers’ data, or combine their primary data with others’ secondary data, to answer novel questions without having to put people at risk of research harms by conducting further research with them. Archives can collect primary data from multiple researchers to streamline the process of acquiring data to answer novel questions by re-examining data, again without putting people at risk of research-related harms.

Data Stewardship Bodies (such as Data Trusts or Data Donation Platforms) → Researchers and Research Institutions; Government Agencies; Private Companies or Corporations
Outcome: Individuals and groups share their data with others according to their interests, as specified to and protected by a trusted, fiduciary actor.
Terms: Data Trusts, Data Donation
[1] HHS Office of Research Integrity, ORI Introduction to RCR. https://ori.hhs.gov/content/Chapter-6-Data-Management-Practices-Data-sharing
[2] H.R.4174 – 115th Congress (2017-2018): Foundations for Evidence-Based Policymaking Act of 2018. (2019, January 14). https://www.congress.gov/bill/115th-congress/house-bill/4174
[3] “The Biden Administration Launches the National Artificial Intelligence Research Resource Task Force”. https://www.whitehouse.gov/ostp/news-updates/2021/06/10/the-biden-administration-launches-the-national-artificial-intelligence-research-resource-task-force/
[4] Goroff, Daniel, Jules Polonetsky, and Omer Tene. (2018). Privacy Protective Research: Facilitating Ethically Responsible Access to Administrative Data. The Annals of Political and Social Science, Vol 675, Issue 1, pp. 46-66. https://doi.org/10.1177/0002716217742605.
Harris, Leslie and Chinmayi Sharma. (2017). Understanding Corporate Data Sharing Decisions: Practices, Challenges, And Opportunities for Sharing Corporate Data with Researchers. Future of Privacy Forum. https://fpf.org/wp-content/uploads/2017/11/FPF_Data_Sharing_Report_FINAL.pdf.
[5] European Commission. (2021). “A European Strategy for Data” https://digital-strategy.ec.europa.eu/en/policies/strategy-data
[6] Open Data Institute. (2020). “Data Trusts in 2020”. https://theodi.org/article/data-trusts-in-2020
Future of Privacy Forum Releases Student Monitoring Explainer
On October 27, FPF released a new infographic, “Understanding Student Monitoring,” depicting the variety of reasons why schools monitor student digital activities, what types of student data are being monitored, and how that data could be used. While student monitoring is not new, it has gained significant traction recently due to the shift to remote learning and the increase in school-managed devices being issued to students.
“Student monitoring has been happening for years, but too often families only learn about it after their child has been flagged or they’ve read something in the news. And that lack of transparency creates questions and confusion about how exactly it works, and what is – and is not – being monitored,” said Amelia Vance, FPF’s Vice President of Youth and Education Privacy. “We hope that this infographic will help parents, students, educators, policymakers, and other stakeholders understand generally how student monitoring works and what it aims to do, and ultimately become empowered to ask questions about the monitoring products being used in their own districts, as there is often considerable variation.”
The infographic depicts the main reasons why schools monitor student activity online—ensuring student safety, legal compliance, and addressing community concerns—and highlights two areas of frequent confusion: what types of student data are being monitored, and how that data could be used.
While school administrators work with their chosen service provider to set up a monitoring system that meets their school’s needs, student data can be collected in multiple ways, including from:
School-Issued Devices: any student data that travels through an internet connection, wired or wireless, on a school-managed device.
School-Managed Internet Connections: data from students’ online content or activities on school-managed internet connections, potentially including take-home internet hotspots.
School Apps & Accounts: student data from certain school-managed accounts, regardless of whether students access the accounts from personal devices or personal internet connections at home.
Monitoring systems analyze student data from these sources for potential concerning indicators, which are typically related to warning signs of self-harm, violence, bullying, vulgarity, pornography, or illegal behaviors. Some systems flag content for human review. From there, depending on the nature and severity of the flagged content and monitoring system in place, several actions could occur. The student could be sent a warning, the content could be blocked, or a designated school contact could be alerted. These actions are explored in further depth in FPF’s accompanying blog.
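As a rough illustration of the flag-and-act flow described above, here is a minimal, hypothetical sketch. Real monitoring products use far more sophisticated methods (machine learning classifiers, context analysis, human review queues), and the indicator terms, severity tiers, and action names below are invented for illustration only:

```python
# Hypothetical sketch: scan student activity for concerning indicators,
# then map the severity of any match to an escalating action.
INDICATORS = {
    # term -> (category, severity 1-3); real systems use far richer models
    "hurt myself": ("self-harm", 3),
    "fight after school": ("violence", 2),
    "stupid idiot": ("bullying", 1),
}

# Severity tiers map to the actions described above: warn, block, or alert.
ACTIONS = {1: "warn_student", 2: "block_content", 3: "alert_school_contact"}

def review(activity_text):
    """Return the (category, action) pairs triggered by a piece of student activity."""
    flags = []
    lowered = activity_text.lower()
    for term, (category, severity) in INDICATORS.items():
        if term in lowered:
            flags.append((category, ACTIONS[severity]))
    return flags

print(review("I want to hurt myself"))  # highest tier: alert a designated contact
print(review("see you at practice"))    # nothing flagged
```

Even this toy version makes the policy questions concrete: who chooses the indicator list, who sees the flags, and how long flagged content is retained.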
“Many school administrators, students, and families may be aware that monitoring systems seek to identify concerning indicators from students’ online activities, but there is often less understanding about what occurs once a system does flag concerning activity,” said Yasamin Sharifi, a Policy Fellow in FPF’s Youth and Education Privacy team. “FPF’s new infographic clarifies the analysis, actions and data retention that a monitoring system and school may perform. This understanding is crucial for any stakeholder seeking to comprehend the practical impacts of a student monitoring system.”
The Future is Open: The U.S. Turns to Open Banking
FPF is pleased to work with a broad set of stakeholders on concepts around privacy and open banking. For more information on our new Open Banking Working Group and related projects, please contact Jeremy Greenberg: [email protected].
Open banking is a concept that describes banks and other financial institutions, such as credit unions, providing rights to customers over their financial data, including the ability to access, share or port data to third parties for various services.
The inherent tensions found in open banking between privacy, competition, and data portability requirements mirror similar concerns across the spectrum of Big Data.
Current challenges to a widespread and healthy open banking ecosystem in the U.S. include a lack of harmonized rules and principles for maintaining strong privacy protections involving financial data and an absence of standardized technical architecture.
The Consumer Financial Protection Bureau (CFPB) will take the lead on facilitating open banking in the U.S. and crafting rules regarding data protection and security; the CFPB should consider lessons learned from international approaches.
Open banking proponents and policymakers should be mindful of the unique sensitivity of financial information and the complex data protection risks raised by increased sharing of banking data—even when sharing is directed by consumers.
Introduction
In July 2021, President Biden signed the Executive Order on Promoting Competition in the American Economy. The Executive Order takes a “whole of government approach” to enforcing antitrust laws across the economy, with clear implications for data protection and privacy. Notably, the order encourages the Consumer Financial Protection Bureau (CFPB) to consider crafting rules under section 1033 of the Dodd-Frank Act in support of open banking with the goal of making it easier for consumers to safely switch financial institutions and use novel and innovative financial products while maintaining privacy and security.
The Order’s callout signals that the Biden administration views open banking as an important initiative for promoting consumer choice, fostering competition, and protecting consumers’ privacy. The debate around open banking highlights tensions between privacy and competition along with a number of privacy flashpoints including: data portability, access, sharing, transparency, control, and interoperability.
Open Banking Can Provide New Rights and Benefits to Consumers and Help Spur Competition, But Technical and Privacy Challenges Remain
Open banking is a concept that describes banks and other financial institutions, such as credit unions, providing rights to customers over their financial data, including the ability to share data or permissions over their data with third parties for various services. These rights include the right to access their financial data, port their data and switch financial institutions, and grant permission to third parties to carry out transactions and provide financial services to best meet a customer’s needs. For example, individuals could grant access to their financial data to a third party to complete an automated payment or provide tailored financial planning advice based on a consumer’s individual finances or credit history. Proponents of open banking argue that another benefit is increased competition among financial institutions. Firms entering the financial sector may offer novel services that spur competition across the industry.
One current challenge to a widespread and healthy open banking ecosystem in the U.S. is a lack of harmonized rules and principles for maintaining strong privacy protections involving financial data. As a result, some traditional banking institutions concerned with maintaining strong customer privacy might be hesitant to support an open banking ecosystem that lacks clear and strong privacy rules and principles that equal, or exceed, the current financial privacy and security protections afforded to consumers by regulations such as the Gramm-Leach-Bliley Act (GLBA) or the Fair Credit Reporting Act (FCRA).
Another roadblock to widespread and privacy-protective open banking is the need for standardized technical architecture—particularly interoperable APIs— to enable the safe portability of financial data. A standardized and interoperable API would allow third parties to carry out their services on behalf of customers without accessing certain personal information, such as various login credentials. In the absence of widely adopted secure APIs, third parties sometimes turn to screen scraping to perform services, while collecting customer login credentials and other personal information, leading to potential privacy and security risks such as exposing consumer information in the case of a data breach and consumer impersonation. While current industry efforts such as the Financial Data Exchange’s (FDX) API are underway, a lack of standardized rules and technical standards, such as machine readable file rules, can lead to less privacy-protective methods of third parties accessing data.
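To make the contrast with screen scraping concrete, here is a minimal, hypothetical sketch of the scoped-token pattern that standardized APIs enable: the third party holds a token limited to specific permissions rather than the customer’s login credentials. This is not FDX’s actual specification; all names here (`Bank`, `issue_token`, the `read:transactions` scope) are illustrative assumptions:

```python
import secrets

class Bank:
    """Toy model of a bank issuing scoped access tokens instead of sharing credentials."""
    def __init__(self):
        self._tokens = {}  # token -> set of granted scopes
        self._transactions = ["-$42.10 groceries", "+$1,500.00 salary"]

    def issue_token(self, scopes):
        # In a real flow the customer authenticates with the bank directly
        # (e.g. an OAuth-style redirect); the third party never sees credentials.
        token = secrets.token_hex(8)
        self._tokens[token] = set(scopes)
        return token

    def read_transactions(self, token):
        # Access is checked against the token's granted scopes, not an identity
        # impersonated with scraped credentials.
        if "read:transactions" not in self._tokens.get(token, set()):
            raise PermissionError("token lacks read:transactions scope")
        return list(self._transactions)

bank = Bank()
token = bank.issue_token(["read:transactions"])  # customer grants read-only access
history = bank.read_transactions(token)          # third party reads data without credentials

unscoped = bank.issue_token([])                  # a token without the scope is refused
try:
    bank.read_transactions(unscoped)
except PermissionError as e:
    print("refused:", e)
```

With screen scraping, by contrast, the third party holds the customer’s actual credentials and can do anything the customer can; scoped tokens narrow both the data exposed and the blast radius of a breach.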
The CFPB Will Continue Taking the Lead on Facilitating Open Banking in the U.S, While Considering Lessons Learned from International Approaches
Prior to the Executive Order, the CFPB had taken some preliminary steps to promote safe open banking in the U.S. In 2017, the agency released a set of broad non-binding principles intended “to help foster the development of innovative financial products and services, increase competition in financial markets, and empower consumers to take greater control over their financial lives.” Key areas of focus include: data access (enabling consumers to obtain financial information in a timely manner without being compelled to share account credentials with third parties); informed consent (in which consumers understand terms & conditions, with the ability to readily revoke authorizations granted to third parties); payment authorization (in which third parties are required to obtain specific authorization for distinct activities); and efficient and effective accountability mechanisms (incentivizing stakeholders to prevent, detect, and resolve unauthorized access, sharing, and payments); among several other areas.
The CFPB next weighed in on the issue in 2020 when it held the CFPB Symposium: Consumer Access to Financial Records, where experts discussed many of the concepts highlighted in the agency’s principles. Following the symposium, in October 2020, the CFPB initiated an Advance Notice of Proposed Rulemaking on consumer access to financial records and how the agency might develop rules for implementing section 1033 of the Dodd-Frank Act. This is the same rulemaking effort highlighted in the Executive Order. The agency sought comments on the costs and benefits of open banking, and on how it might handle many of the data protection-related concepts outlined in its 2017 principles, including access, control, privacy, security, and standard setting. The CFPB has not issued a final rule or concluded the rulemaking, but the agency recently listed data sharing in its current regulatory agenda. Beyond the CFPB, the Federal Reserve, the Federal Deposit Insurance Corporation (FDIC), and the Office of the Comptroller of the Currency (OCC) released a Proposed Interagency Guidance on Third-Party Relationships: Risk Management, focusing on how banks manage risks in their third-party relationships with fintech companies, vendors, and other affiliates. While other regulators are involved in this space, the CFPB appears poised to return to its rulemaking effort as a near-term priority.
While the U.S. is getting serious about responsibly regulating and setting standards for open banking, several international models are already well down this path. In 2015, the EU released an updated Payment Services Directive (PSD2), which went into effect in 2018. PSD2 aims to promote competition, privacy, and data transfer between EU countries and institutions. However, some PSD2 requirements, such as rules around consent, can differ significantly from requirements found in the GDPR and other European laws, leading to a lack of harmonization and confusion for consumers, regulators, and financial institutions. Other leading open banking approaches include recent efforts in the UK, Australia, Brazil, Israel, India, Canada, and Mexico, among others. The technical standards and requirements around open banking will likely have to be harmonized between different regimes to accommodate the international and cross-border nature of the global economy.
Open Banking Highlights Broader Questions about Data Portability, Competition, and Cross-Border Data Flows
While the Executive Order sends a trumpet blast to regulators, consumers, and financial stakeholders that open banking is a priority area for the current administration, many of the data protection themes at play are much broader than open banking and touch multiple industries. The inherent tensions in open banking between privacy and competition – such as the need to keep data private and in trusted hands versus new players obtaining access to or control over data for various purposes – exist across the spectrum of Big Data. Further, open banking helps animate the current debate and recent interest around data portability requirements from agencies such as the FTC. Ultimately, interoperable rules and technical measures are necessary not only for beneficial and safe open banking, but also for other international and cross-border data exchanges.