FPF in 2021: Delivering Privacy Insights & Expert Analysis
With the last days of 2021 upon us, we wanted to take a moment to reflect on this exciting year that saw FPF expand its presence both domestically and around the globe, while producing engaging events, thought-provoking analysis, and insightful reports with real-world impact.
Growing Global Expertise
The scope of FPF’s international work continued to expand this year, as policymakers around the world focused on ways to establish or improve privacy frameworks. More than 120 countries have now enacted a privacy or data protection law, and FPF both closely followed and advised on significant developments in Asia, the European Union, and Latin America.
FPF saw its presence in Asia grow substantially this year with the opening of the FPF Asia-Pacific office, headed by Dr. Clarisse Girot. The FPF Asia-Pacific office will provide expertise in digital data flows and discuss emerging data protection issues in a way that is useful for regulators, policymakers, and data protection professionals. Along with the opening of the office, FPF also announced a partnership with the Asian Business Law Institute (ABLI) to support the convergence of data protection regulations and best privacy practices in the Asia-Pacific region. The Asia-Pacific office held several events in the months following its opening, including a virtual event during Singapore’s Personal Data Protection Week and an event co-hosted with the Asian Development Bank titled Trade-Offs or Synergies? Data Privacy and Protection as an Engine of Data-Driven Innovation.
Following the Indian government’s adoption of regulations that imposed strict rules on the removal of illegal content and the automated scanning of online content, FPF published a review of the new rules and included relevant resources with more information. This year also saw FPF announce Malavika Raghavan as its new Senior Fellow for India, an appointment that further expanded FPF’s reach in Asia to one of the key jurisdictions for the future of data protection and privacy law.
International data flows have been an important topic of discussion over the past year. Following the Schrems II decision in 2020, which had serious implications for data flows from the EU to the US, the FPF global team created a series of informative infographics that explain the complexity of international data flows in two distinct contexts: retail and education services.
Scholarship & Analysis on Impactful Topics
The core of FPF’s work remains focused on providing insightful, independent analysis on pressing privacy issues. 2021 saw FPF provide this important leadership through events, awards, projects, papers, and more, providing insights into issues such as academic data sharing, digital contact tracing technologies, and neurotechnologies.
For the second year, FPF recognized privacy-protective research collaboration between a company and researchers with the Award for Research Data Stewardship. The first winning project this year was a collaboration between Stanford Medicine researchers, led by Tejaswini Mishra, Ph.D., and Professor Michael Snyder, Ph.D., and the medical wearable and digital biomarker company Empatica. The other recognized team was a collaboration between Google’s COVID-19 Mobility Reports and COVID-19 Aggregated Mobility Research Dataset projects and researchers from multiple universities in the United States and around the globe. These projects demonstrated how privately held data can be responsibly shared with academic researchers, supporting significant progress in medicine, public health, education, social science, and other fields.
FPF created a new Open Banking Working Group to discuss issues surrounding open banking. FPF has released several blog posts and hosted events on the topic, with more to come in the new year.
FPF offered resources and best practices on a variety of topics this year. In August, with support from the Robert Wood Johnson Foundation, we developed actionable guiding principles to bolster the responsible implementation of digital contact tracing technologies (DCTT). The principles help implementing organizations take a responsible approach to how their technology collects, tracks, and shares personal information.
It is important to take steps to ensure equity in access to DCTT and understand the societal risks and tradeoffs that may accompany its implementation. Privacy leaders who understand these risks will be better able to bolster trust in this technology within their organizations.
To help organizations share mobility data while reducing privacy risks in the data-sharing process, FPF and SAE’s Mobility Data Collaborative (MDC) created a transportation-tailored privacy assessment that provides practical guidance for data from ride-hailing services, e-scooters, and bike-sharing programs.
“Micromobility services can play a key role in improving access to jobs, food and health care. However, there are multiple factors for companies and government agencies to consider before sharing mobility data with other organizations, including the precision, immediacy, and type of data shared.”
FPF and the Privacy Tech Alliance released a report titled “Privacy Tech’s Third Generation: A Review of the Emerging Privacy Tech Sector,” which analyzed the evolving privacy technology market and identified five market trends and their implications for the future. The report focused on the COVID-19 pandemic’s role in accelerating the global marketplace’s adoption of privacy tech.
FPF held a series of workshops on manipulative design with technical, academic, and legal experts to define clear areas of focus for consumer privacy and to develop guidance for policymakers and legislators. These workshops examined manipulative design in a variety of contexts, including youth and education, online advertising and U.S. law, and the GDPR and European law. The issues of manipulative design, transparency, and trust were also discussed during the first annual Dublin Privacy Symposium, hosted by FPF.
In collaboration with the IBM Policy Lab, FPF released a set of recommendations to promote privacy and mitigate risks associated with brain-computer interfaces. The report provides developers and policymakers with actionable ways this technology can be implemented while protecting the privacy and rights of its users. Following the release of the report, FPF and the IBM Policy Lab hosted an online event discussing the report and the brain-computer interface field more broadly.
FPF recognizes the need for access to personal information for independent research and platform accountability, and supports this research when it is done responsibly. In November and December, FPF hosted a series of salon dinners titled “Promoting Responsible Research Data Access,” which brought together the many voices needed for a robust conversation on how to unlock data for scientific research; the series will inform a playbook for privacy-protective research access to corporate data.
Expanding the Conversation Around Responsible Data Use
FPF continues to convene industry experts, academics, consumer advocates, and other stakeholders to explore challenging issues in the data privacy field. Members of our team have also testified before state and national legislative bodies as experts on potential privacy legislation.
For the 11th year in a row, FPF recognized leading privacy research and analytical work with the Privacy Papers for Policymakers Award, held virtually for the first time. The winners presented their research to an audience of academic, industry, and policy professionals in the privacy field. The event was headlined by a keynote address from Rebecca Kelly Slaughter, her first major speech as then-Acting Chairwoman of the FTC. In her remarks, she focused on making enforcement more efficient and effective and on protecting privacy during the COVID-19 pandemic.
FPF launched a new training program in 2021 focused on the use of data-driven technologies. The Understanding Digital Data Flows training program provided a deep dive into how technology and personal data are utilized in a variety of sectors. The training sessions were led by FPF experts and discussed topics including artificial intelligence, de-identification, and more. These informative trainings will continue into 2022 and the first eight sessions are already open for registration.
In the same vein, FPF released a series of insights for lawyers to understand before advising clients on artificial intelligence. Among the insights were an explanation of AI’s probabilistic, complex, and dynamic nature, the importance of transparency in AI use, and the risks that algorithmic bias presents to AI users.
Laws like ECOA, GDPR, CPRA, the proposed EU AI regulation, and others are forming a legal foundation for regulating AI… As more organizations begin to entrust AI with high-stakes decisions, there is a reckoning on the horizon.
To add to the conversation surrounding COPPA and verifiable parental consent, FPF released a report outlining suggested solutions drawn from research and stakeholder insights. The report identifies key friction points in the verifiable consent process, including efficiency, accessibility, privacy and security, and convenience and cost barriers. Throughout the year, FPF collected comments from industry professionals, advocates, and academics to help identify possible solutions to the challenges associated with verifiable parental consent, which will inform our work in 2022.
Following the release of a report which provided recommendations on the use of augmented and virtual reality technologies, FPF hosted XR Week, a week dedicated to ethical and privacy concerns of AR and VR technologies. The week included several events including a roundtable with expert participants and several conversations held in a virtual reality space.
During debate over Maryland HB 1062, which proposed several updates to Maryland’s Student Data Privacy Act, FPF’s Amelia Vance testified before the Maryland House Ways and Means Committee on the bill. In her testimony, Amelia voiced her approval of many of the proposed updates and offered recommendations on two amendments: one clarifying how the bill defines “operator,” and another addressing the scope of the Council’s recommendations.
The FPF Youth & Education team released a series of resources focused on school surveillance and student monitoring. In October, the team released an infographic, “Understanding Student Monitoring,” that depicts reasons schools monitor student activity, what types of data are being monitored, and how that data can be utilized. Following reports that the Pasco County (FL) Sheriff’s Office was keeping a list of students who may be “potential criminals,” FPF released a report advocating for transparency and accountability for parents and students, FERPA compliance, and more robust privacy training for law enforcement and SROs.
Earlier this month, Stacey Gray testified in front of the U.S. Senate Finance Subcommittee on Fiscal Responsibility and Economic Growth on consumer privacy in the technology sector. Her testimony focused on the term “data brokers” and explained how third-party data processing is central to many concerns around privacy, fairness, accountability, and crafting effective privacy regulation.
The FPF team welcomed many new faces during 2021 and saw the promotion of key staff members to senior positions. John Verdi became Senior Vice President of Policy, Amelia Vance was elevated to Vice President of the Youth & Education program, Gabriela Zanfir-Fortuna was promoted to Vice President for Global Privacy, and Stacey Gray was promoted to Director of Legislative Research & Analysis. This year, the leadership team also saw the addition of Amie Stepanovich as Vice President of U.S. Policy and Rebekah Stroman as Chief of Staff. 2021 also saw us welcome Clarisse Girot, Lee Matheson, Keir Lamont, Tatiana Rice, Nancy Levesque, Payal Shah, Joanna Grama, and Jim Siegl to the FPF staff.
“The FPF team has grown to meet the need for independent privacy expertise, especially in the international, youth & education, and policy spaces,” said Jules Polonetsky, CEO of FPF. “I could not be more proud of the high-quality work that the FPF staff has produced to increase understanding of how technology impacts civil and human rights. We’re looking forward to 2022 and wish everyone Happy Holidays and a Happy New Year.”
This is by no means a comprehensive list of all of FPF’s important work in 2021, but we hope it gives you a sense of the impact that our work had on both the privacy community and society at large. Keep updated on FPF’s work by subscribing to our monthly briefing and following us on Twitter and LinkedIn.
On behalf of the entire FPF staff we wish you a Happy New Year!
Public Comments Surface Fault Lines in Expectations for New California Privacy Law
In November 2020, California voters adopted the California Privacy Rights Act (“CPRA”) ballot initiative, which was developed to strengthen and expand upon the underlying California Consumer Privacy Act (“CCPA”) that the state legislature adopted in 2018. While the CPRA provides for significant new consumer rights and responsible data processing obligations on covered businesses, many questions regarding the scope and practical operation of these requirements remain unresolved. A recently released set of public comments on a CPRA rulemaking process brings some of these contested issues into sharper focus.
The CPRA delegates both rulemaking and enforcement authority to a brand-new, privacy-specific body, the California Privacy Protection Agency (“the Agency”). Following the appointment of a governing board, the Agency took its first public-facing steps toward rulemaking in September 2021, issuing an invitation for comment on eight topics focused on new and undecided issues introduced by the CPRA. Last week, the Agency published approximately 70 submissions that it received during the course of its 45-day comment period.
A variety of individuals and organizations filed comments including trade associations and companies representing diverse industry sectors, consumer rights groups, and academics. One noteworthy filing is from Californians for Consumer Privacy, a nonprofit organization helmed by Alastair Mactaggart. Given the group’s role in drafting the California Privacy Rights Act ballot initiative and driving the public advocacy campaign that led to its adoption, these comments are indicative of the intent behind some of the ambiguous and contested provisions of the CPRA.
Across hundreds of pages of comments, stakeholders displayed sharp disagreements over what the CPRA does and should require on multiple consequential issues. These contested topics for CPRA rulemaking include (1) how businesses should conduct and submit privacy and security risk assessments, (2) how automated decisionmaking technologies should be regulated, (3) whether the CPRA requires the recognition of user-enabled opt-out signals, (4) the scope of the Agency’s audit authority, and (5) how the Agency should further define and regulate manipulative design interfaces known as “dark patterns.”
1. Privacy and Security Risk Assessments
The CPRA brings California into greater alignment with other global and domestic privacy frameworks by requiring organizations engaged in data processing that poses a “significant risk” to consumer privacy and security to conduct and submit to the Agency risk assessments on a “regular basis.” However, the CPRA leaves many details to Agency regulations, including the specific activities that trigger the requirement to conduct an assessment, the scope and procedures for completing assessments, and the cadence for submitting assessments to the Agency. Comments revealed a variety of preferences for how and when businesses should be required to conduct and submit assessments.
Filings from industry stakeholders frequently raised concerns that overly formalistic procedures and reporting requirements for risk assessments would create unnecessary burdens for both businesses and the Agency. Multiple industry groups suggested that assessments be submitted to the Agency only upon request (consistent with the Virginia and Colorado privacy laws) or, if mandatory, once every three years. Civil society organizations typically sought to impose more expansive assessment requirements on covered businesses, with one coalition arguing that assessments should be conducted in advance of any change in business practices that “might alter the resulting risks to individuals’ privacy” and be resubmitted to the Agency at six-month intervals.
Californians for Consumer Privacy encouraged the Agency to adopt a graduated approach, with requirements to conduct risk assessments initially falling on only large processors of personal information. The group further suggested variable timing requirements for submitting those assessments established on the basis of the “intensity” of personal information and sensitive personal information processing.
2. Automated Decisionmaking Technology
The CPRA directs the Agency to develop regulations “governing access and opt-out rights” with respect to the use of automated decisionmaking technology (“ADT”), including “profiling.” The Agency sought comments on multiple aspects of these rights, including the activities that should constitute regulated ADT, what businesses should do to provide consumers with “meaningful information about the logic” of automated decisionmaking processes, and the scope of consumers’ opt-out rights with regard to ADT. Industry and civil society comments differed on how to define the scope of ADT and on whether the CPRA creates a standalone consumer right to opt out of ADT beyond the CPRA’s rights to opt out of the sale and sharing of personal information and to limit the use of sensitive personal information.
Numerous commenters, including the Future of Privacy Forum, recommended that the Agency limit the scope of regulated ADT to decisions that produce “legal or similarly significant effects” for consumers, noting a similar standard under the GDPR. Legal or similarly significant effects would include, for example, the automatic refusal of an online credit application; decisions made by online job recruitment platforms; decisions that affect other financial, credit, employment, health, or education opportunities; and likely, in certain contexts, behavioral advertising.
Several industry groups such as the California Grocers Association further sought to ensure that the regulations will govern only “fully” automated processes that produce “final” decisions. Supporting this analysis, many commenters pointed to a universe of clearly low-risk, socially beneficial tools such as calculators, spreadsheets, GPS systems, and spell-checkers that could be swept up by overly broad regulation. Civil society groups including EFF and EPIC largely took a different approach, arguing that given emerging concerns of algorithmic harm and bias, the Agency’s regulations should more broadly define ADT, to include, for example, “systems that provide recommendations, support a decision, or contextualize information.”
Notably, Californians for Consumer Privacy argued that the Agency’s regulations should “specify that consumers have the right to opt-out of this automated decisionmaking” (referencing the online advertising ecosystem), and that the Agency should subsequently expand the right to opt-out of ADT to “other areas of online and business activity.” In stark contrast to this view, several industry groups argued that the Agency cannot create a standalone consumer right to opt-out of ADT as such a right is not provided for in the CPRA itself. Two prominent trade associations, CTIA and TechNet, further asserted that such a delegation of rulemaking authority would be “unconstitutional.”
3. Opt-Out Preference Signals
One of the highest-profile debates in the present consumer privacy landscape concerns the adoption of “user-enabled global privacy controls,” a potentially broad array of technical signals first recognized under the CCPA’s regulations. In July 2021, a California Attorney General FAQ page was updated to assert that one such tool, a browser signal named the Global Privacy Control (“GPC”), “must be honored by covered businesses as a valid consumer request to stop the sale of personal information.” The public comments revealed stark differences in statutory interpretation as to whether the CPRA requires that businesses honor this class of controls.
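Technically, the GPC is a simple mechanism: participating browsers attach a Sec-GPC: 1 HTTP header to outgoing requests and expose a corresponding navigator.globalPrivacyControl property to scripts. The following minimal sketch shows how a covered business might detect the signal server-side; the Flask route and the opt-out helper are illustrative assumptions on our part, not part of the specification.

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/")
    def index():
        # Per the GPC specification, participating browsers send "Sec-GPC: 1".
        if request.headers.get("Sec-GPC") == "1":
            # Hypothetical handling: treat the signal as a request to opt the
            # visitor out of the "sale" or "sharing" of their personal data.
            record_opt_out(request.remote_addr)
        return "ok"

    def record_opt_out(visitor_id):
        # Placeholder only; real handling depends on the business's systems.
        pass

How a business should reconcile such a signal with other, possibly conflicting consumer choices is exactly the kind of operational question raised in the comments discussed below.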
Industry groups including ESA, the California Retailers Association, and the California Chamber of Commerce largely adopted the interpretation that the text of the CPRA makes business recognition of opt-out preference signals optional, based on the reading that CPRA sections 1798.135(b)(1) and (3) offer multiple paths to complying with the exercise of user rights. One exception came from Mozilla, which recently implemented the GPC in the Firefox browser; Mozilla noted that the enforceability of preference signals under the CCPA “remains ambiguous” and encouraged the Agency to expressly require that companies comply with the GPC under the CPRA.
On the other hand, civil society organizations tended to argue that the CPRA expressly mandates the recognition of global signals, pointing to section 1798.135(e), which concerns the exercise of consumer rights (including by opt-out signals) carried out by other authorized persons. Consumer Reports argued that recognition of these signals is required by the “plain language” of this provision and also noted that this interpretation would be consistent with the CPRA’s stated purpose of strengthening the CCPA. Californians for Consumer Privacy also took a firm stance, arguing that “there is no reading of the statute that would allow a business to [refuse] to honor a global opt-out signal enabled by a consumer” and criticized “misinformation we have seen from the advertising and technologies industries” on the scope of CPRA opt-out rights.
In the Future of Privacy Forum’s comments, we noted that regardless of whether the recognition of global opt-out signals is mandated or voluntary, the Agency has an important opportunity to set clear standards for the adoption of signals that will comply with the CPRA, GDPR, or the Colorado Privacy Act (which will require recognition of certain preference signals by 2025). In this context, the Agency should work with expert stakeholders to address many unresolved operational issues, such as how signals should be interpreted if they conflict with other consumer choices, and establish procedures for the approval of new signals over time.
4. Agency Audit Authority
The CPRA empowers the Agency to conduct audits of businesses to ensure compliance with the Act. Again, many of the details for the breadth and conduct of such audits are left to rulemaking, and the Agency requested expansive feedback on issues including the scope of its audit authority, the processes that audits should follow, and the criteria the Agency should use in selecting businesses to audit.
Californians for Consumer Privacy stated that the Agency Auditor’s scope should “only be limited by whether a request is reasonably linked to a potential violation of the CPRA.” The group further argued that the Agency should leave the determination of its auditing criteria to its Executive Director and Auditor rather than through rulemaking, so as not to alert businesses to these factors.
In contrast, industry groups suggested multiple approaches to clearly defining audit authority and criteria. Popular recommendations included requirements that the Agency (1) have evidence of a violation of a substantive CPRA provision that risks significant harm to consumers before initiating an audit, (2) provide 90 days’ notice to a business prior to an audit, (3) impose guardrails to ensure that audits are separate and independent from the Agency’s investigation and enforcement teams, and (4) create “fair and equal treatment” rules for determining which companies are audited.
5. “Dark Patterns”
Finally, the Agency requested feedback on a number of definitions used by the CPRA, including that of manipulative design interfaces known as “dark patterns.” The CPRA defines “dark patterns” as “a user interface designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision-making, or choice, as further defined by regulation.” The Act contains relatively limited prohibitions on their use: it states that “consent” obtained through “dark patterns” is invalid, and it directs the Agency to ensure through rulemaking that web pages permitting users to opt back in to the sale or use of their information under the CPRA do not utilize “dark patterns.” Nevertheless, the concept of “dark patterns” has received increasing regulatory attention in recent years and has been flagged by Agency Board Chairperson Urban as a potential subject for discussion at a forthcoming series of “informational hearings.”
Industry groups such as the Internet Association raised concerns with the definition of “dark patterns” under the CPRA, arguing that essentially any interface could be interpreted as impairing user choice and therefore be considered a “dark pattern” under the Act, including the use of privacy-protective default settings. Several of these organizations requested that the definition of “dark patterns” be narrowed to focus on design practices that amount to consumer fraud and encouraged forthcoming regulations to provide clear examples of such conduct.
In contrast, a group of Stanford academics led by Professor Jen King suggested regulation on this subject beyond the context of consent interfaces and specifically requested an expanded definition of “dark patterns” to encompass novel interfaces such as voice activated systems. Similarly, despite raising concerns with the suitability of the term “dark patterns,” Common Sense Media suggested defining manipulative designs “as broadly as possible” to include features that encourage children to share personal information.
Conclusion
The Agency’s request for comments has revealed significant divergences in policy and statutory interpretation among stakeholders over the appropriate scope and application of CPRA requirements. The resolution of these contested issues through Agency rulemaking will likely carry significant implications for the exercise of consumer rights under the CPRA, as well as for the practical compliance obligations of covered businesses. Interested parties can expect to learn more about the ultimate scope and operation of the CPRA in early 2022, when the Agency intends to publish its initial set of proposed regulations and statement of reasons.
Future of Privacy Forum Adds Amie Stepanovich, Additional Experts to U.S. & Global Policy Teams
New staff members add expertise and expand US policy engagement for independent data protection non-profit
The Future of Privacy Forum (FPF) has added three new members to its U.S. policy team and a senior fellow to its global team. Amie Stepanovich will join FPF as VP of U.S. Policy, Keir Lamont joins as Senior Counsel, Tatiana Rice joins as Policy Counsel, and Simon McDougall joins as Senior Fellow, Global. FPF’s Stacey Gray assumes a new role as Director of Legislative Research & Analysis.
“FPF welcomes a widely respected voice in privacy law to our team in Amie Stepanovich,” said John Verdi, FPF’s SVP of Policy. “Amie is a leading thinker with deep experience in privacy law and human rights, which makes her an invaluable advisor to policymakers, industry leaders, and academics studying the intersection of tech and data protection.”
Amie will join FPF in January 2022 as VP of U.S. Policy. Before joining FPF, Amie served as the Executive Director of Silicon Flatirons, a center at the University of Colorado Boulder focused on convening multi-stakeholder discussions and developing the next generation of technology lawyers and policy experts. Amie also previously served as U.S. Policy Manager and Global Privacy Counsel at Access Now, where she worked to protect human rights through law and policy involving technologies and their use. Prior to her time at Access Now, Amie was the Director of the Domestic Surveillance Project at the Electronic Privacy Information Center. Amie has also served on FPF’s Advisory Board.
In a further expansion of FPF’s U.S. team, Keir Lamont and Tatiana Rice have joined the organization to focus on legislative research and analysis in the United States.
“Keir and Tatiana will grow FPF’s ability to serve as an independent voice on complex legislative and policy matters at the state and national levels,” said Stacey Gray, FPF’s newly-named Director of Legislative Research & Analysis. “As the national conversation about data privacy and tech ethics continues, FPF will support policymakers with informational resources on new technologies and regulatory approaches through our public engagement, publications, testimony, events, and other programs.”
Keir joins FPF as Senior Counsel on the legislation team, where he will support policymaker education and independent analysis concerning federal, state, and local consumer privacy laws and regulations. Prior to joining FPF, Keir worked as Policy Counsel for the Computer & Communications Industry Association (CCIA), where he focused on issues related to privacy, security, and emerging technology. Before joining CCIA, Keir managed the Program on Data and Governance at The Ohio State University Moritz College of Law. He was previously a fellow at ZwillGen and Access Now.
Tatiana joins FPF as Policy Counsel on the legislation team, where she will analyze legal and legislative trends relating to data privacy and emerging technologies on both the federal and state levels. Tatiana comes to FPF from Shook, Hardy & Bacon LLP, where she led biometric compliance efforts and assisted clients with managing data privacy compliance, litigation, and investigation.
Simon McDougall joins FPF as a Senior Fellow, working closely with the FPF Global team. Simon previously was a member of the Executive Team and Management Board of the UK Information Commissioner’s Office. He established the Regulatory Innovation and Technology Directorate, led the ICO’s response to the COVID-19 pandemic, and worked with the CMA, FCA, and Ofcom to establish the Digital Regulation Cooperation Forum. Prior to this appointment, Simon led a global privacy consulting practice at Promontory, an IBM company, leading projects across Europe, the U.S., and Asia. He previously led a similar team for Deloitte in the UK.
FPF brings together privacy experts to explore the challenges posed by technological innovation and develop privacy protections, ethical norms, and workable business practices. FPF believes lawmakers and regulators make better policy decisions when they understand the key technologies, business practices, and legal tools available to regulate privacy and data protection.
FPF’s Stacey Gray Testifies Before Senate Finance Committee Regarding Data Brokers, Urges Congress to Pass a Comprehensive Federal Privacy Law
Today, Future of Privacy Forum Senior Counsel Stacey Gray testified before the U.S. Senate Finance Subcommittee on Fiscal Responsibility and Economic Growth regarding consumer privacy in the technology sector.
Stacey’s testimony explains that the term “data brokers” typically encompasses a wide variety of companies and business practices that use personal information for different purposes, some of which directly benefit consumers, and others that primarily benefit the purchasers or users of data. Recent laws and proposed bills define data brokers as entities without a direct relationship with consumers, and this third-party data processing is at the heart of concerns around privacy, fairness, and accountability; the third-party relationship also presents a challenge for crafting effective regulation. While a “first party” company that collects and uses personal data can exercise enormous influence and market power, there is still some degree of public accountability to users who are aware that the company exists and can delete accounts or raise alarms when practices go too far. In contrast, a business lacking a direct relationship with consumers – like a data broker – does not have the same reputational interests, business incentives, or in some cases legal requirements, to limit the collection of consumer data or protect it against misuse.
The lack of a consumer relationship also means that businesses engaged in legitimate data processing often cannot rely on the traditional privacy mechanisms of notice and choice. Meaningful affirmative consent, or “opt-in,” may be impossible or impractical for a business to obtain, while “opting out” after the fact tends to be both inadequate as a safeguard and impractical for consumers to navigate. Consumers can become overwhelmed with choices, and often lack the knowledge to assess future risks, complex technology, or future secondary uses.
What does this all mean? First and foremost, Congress should pass baseline comprehensive privacy legislation that establishes clear rules for both data brokers and first-party companies that process personal information. Such a law should address the gaps in the current U.S. sectoral approach to consumer privacy, and it should incorporate, but not rely solely on, consumer choice: a privacy law should also codify clear limits on data collection; accountability measures such as transparency, risk assessments, and auditing; limitations on the use of sensitive data; and limits on retention.
In the absence of comprehensive legislation, there are a number of steps Congress can take to address risks related to consumer privacy and data brokers, including: (1) empowering the Federal Trade Commission to continue using its authority to enforce against unfair and deceptive trade practices through funding, staff, the establishment of a Privacy Bureau, and civil fining authority; (2) limiting the ability of law enforcement agencies to purchase information from data brokers; (3) enacting sectoral legislation for uniquely high-risk technologies, such as facial recognition; and (4) updating existing laws, such as the Fair Credit Reporting Act, to more effectively cover emerging uses of data.
FPF will continue to provide expert testimony to governing bodies and organizations to shape privacy best practices and policies, both in the United States and globally.
“Are crumbles all that remains of the cookies?” A conversation on the future of ad tech at the Nordic Privacy Arena 2021
On September 27 and 28, 2021, the Swedish Data Protection Forum (Forum för Dataskydd) hosted the 2021 edition of the Nordic Privacy Arena (“Operationalising Data Privacy – Challenges, best practices, and success stories”) in Stockholm, Sweden. This hybrid event brought together privacy practitioners, watchdogs, and academics to debate some of the most pressing issues regarding privacy compliance, such as artificial intelligence (AI), cybersecurity risks, international data transfers, age-appropriate web design, and new enforcement trends.
The end of the first day saw a discussion on online advertising moderated by the Future of Privacy Forum’s Managing Director for Europe, Dr. Rob van Eijk. The panel, entitled “Algorithmic marketing and profiling – are crumbles all that remains of the cookies?”, featured contributions from Dr. Anu Talus, the Finnish Data Protection Ombudsman; Michael Hopp, Partner and Head of the Data Protection team at the law firm Plesner; Anna Eidvall, Partner and Head of the privacy and data protection practice at the law firm MAQS Advokatbyrå; and Patrick Breyer, Member of the European Parliament (MEP) for the Greens/EFA group.
The session was divided into four parts, covered in this blogpost: (1) a debate on cookie consent tools and whether browser settings can do the job, zooming in on data collection practices and the suitability of relying on users’ browser settings; (2) a discussion of the pros and cons of banning all or some targeted ad practices; (3) the speakers’ views on what to expect from the ePrivacy Regulation negotiations over the coming months; and (4) an exchange on whether contextual advertising is a silver bullet or a distant reality.
Cookie consent tools: can browser settings do the job?
Van Eijk started by pointing to the ways in which the Finnish telecom regulator’s (Traficom) recently issued guidance on cookies and other tracking technologies advises service providers to collect website visitors’ consent. He noted that the guidelines drew inspiration from two decisions of the Helsinki Administrative Court from April 2021 by excluding browser settings from the list of appropriate means by which users may express their consent to the placement of cookies on their devices.
In response, Talus underlined that Traficom’s guidance had been issued only two weeks prior, after extensive work with her office during the drafting process. She also said she was pleased with the outcome, as it reflects the Data Protection Authority’s (DPA) longstanding position on cookie consent. Regarding browser settings, Talus stressed the difficulty of collecting GDPR-aligned consent through such means, even though Recital 32 of the GDPR seems to indicate that this is theoretically possible.
Van Eijk then asked MEP Breyer for his thoughts on the Advanced Data Protection Control (ADPC) specification proposed by None of Your Business (NOYB), notably on whether this type of framework — similar to the Do-Not-Track (DNT) specification — has a future in the European regulatory discussion. In reply, Breyer stated that the ADPC addresses users’ “cookie fatigue” by proposing a practicable way for users to make privacy choices and for website owners to respect them. He also took the view that such proposals could helpfully avoid leaving browser manufacturers free to establish default settings.
Then the panel touched on the question: are online players unwilling to correctly configure their consent banners in line with current legal standards? In this regard, Eidvall noted the complexity of the legal framework in this space, with the GDPR interacting with the ePrivacy Directive rules and with telecom and data protection regulators both playing a role in some jurisdictions, such as Sweden. For the speaker, the first dimension means that users’ consent should be sought at two levels: once at the moment of placing the cookie on, or collecting device information from, a user’s device, and again for processing the data for ad targeting or other purposes. It also means that controllers will often need to carry out Data Protection Impact Assessments (DPIAs) and — after the Schrems II CJEU ruling — Transfer Impact Assessments (TIAs), as well as comply with cumbersome information requirements toward users, the importance of which the recent DPC sanction against WhatsApp Ireland illustrated.
Furthermore, Eidvall added, some smaller businesses (such as online publishers) may be unwilling to change their current cookie practices, as they are often in a “do-or-die” situation: should they decide not to deploy behavioral ads, their revenues may significantly decrease. Thus, she argued that the change should be championed by large tech companies. However, she noted that significant change on their part is unlikely to come unless the risk of being sanctioned becomes higher than the business benefits of using cookies.
Hopp observed that companies are now questioning whether they must comply with the privacy rules in force in the jurisdiction where the placement of cookies takes place or with those of their country of establishment, as there have been significant challenges to the latter view. He also noted a lack of clarity around consent requirements for online tracking, which the EDPB has tried to address to some extent and which will hopefully be resolved by the new ePrivacy Regulation.
To wrap up the first topic, van Eijk highlighted that the EDPB has tried to harmonize EU DPAs’ views on cookie consent through its own guidance, which weighs in on “cookie walls” and user affirmative action following the Planet49 case. Additionally, some DPAs are specifically requiring that consent be as easy to refuse as to give when it comes to placing cookies or similar technologies. Conflicting national court decisions on cookie consent may also be a blind spot for companies with an online presence when devising compliance strategies.
Pros and cons of banning all or some targeted ad practices
To stir the debate during the panel’s second segment, van Eijk mentioned EU lawmakers’ discussions on the Digital Services Act’s (DSA) rules on targeted online advertising. In that context, some MEPs favor a strict and encompassing ban on those practices, while others favor narrower prohibitions or only enhanced transparency duties. The moderator was keen to hear speakers’ views, notably on whether consent could serve as a proportionate solution for legitimizing current ad tech practices.
Breyer looked back on last year’s European Parliament (EP) requests to the European Commission (EC) to phase out personalized online ads in favor of contextual ones, which do not rely on personal data processing. One reason he does not support consent-based targeted ads relates to the fact that users are currently deprived of real choices, due to the use of “dark patterns” that make it more cumbersome for them to reject tracking. Another reason Breyer mentioned is that, even where individuals are given a fair choice, there are societal issues associated with targeted advertising, leading to more than just individual harms. In this regard, he noted that the technology undertakings deploy to understand and predict the behavior of online consumers is being leveraged to threaten democracies through the spread of disinformation.
The MEP also mentioned that targeted advertising generates issues in the online media landscape. One of the problems he identified was media outlets’ heavy loss of revenues to targeting companies and ad brokers. He believes that forcing online media to rely on contextual advertising — as newspapers and TV networks do — would create a level playing field that would enable the preservation of professional and quality media. Breyer also noted that, despite a growing number of EP lawmakers now believing that an opt-in standard for targeted online ads is not the solution, there does not seem to be a majority favoring a ban, which could also hamper the strength of the EP’s position in future negotiations with the Council of the EU on the DSA.
The conversation then shifted to the use of metadata in the context of Real-Time Bidding (RTB) requests and whether a specific ban there would be appropriate. Hopp answered in the negative, instead favoring reliance on self-regulation instruments and clearer regulatory guidance on online advertising. He noted that there are nonetheless areas in which all players in the ecosystem should agree that ad targeting is off-limits, such as deliberately serving consumer loan ads to individuals with a strong interest in online betting. On the other hand, he also proposed that regulators could prohibit certain practices by relying on the GDPR’s general Article 5 principles, regardless of whether controllers rely on consent or legitimate interests to carry out personalized advertising.
Eidvall concurred, stating that legal bans are seldom effective. Instead, the speaker advised companies in the online advertising space to look at the issue from a data ethics perspective. She stressed that undertakings ought to start thinking about whether certain processing operations that are technically possible are also morally sound, before implementing their digital marketing strategies.
This led to a debate about whether this type of reflection actually happens within self-regulatory frameworks, and about how enforcement takes place in such scenarios. Is it fair to leave it to browser and app manufacturers to shape the ecosystem by limiting what ad tech providers can technically do, as Apple did with its App Tracking Transparency framework? Eidvall took a positive view of such developments, including Google’s phasing out of third-party cookies, which is scheduled for 2023. She also stressed the importance of avoiding turning privacy into a class issue, which could be done by allowing users who wish to pay with their data to do so, while ensuring that alternative payment methods are available to all.
Van Eijk then took stock of the varying cookie banner configurations and enforcement trends seen across Europe, with the French CNIL’s compliance notices and NOYB’s letters to website owners aimed at fixing some practices. He wondered about the part that standard-setting bodies and trade organizations, such as the Interactive Advertising Bureau (IAB), could play in future enforcement.
On this note, Hopp acknowledged the importance of the IAB’s role in relation to its members but focused on what consumers could do to change the paradigm. He noted that, as more people become aware of their privacy rights, the number of complaints in the face of infringements may well increase. He finished by admitting that some providers may be making deliberate choices to overlook compliance in this realm to maximize their revenues, and that collecting valid consent may not suffice to place them in a good light in the public eye.
On whether the design of fair opt-in mechanisms for online targeting would help fight ubiquitous dark patterns, Breyer observed that users tend to reject tracking when they are given a meaningful choice, as illustrated by Apple’s iOS 14.5 launch. Nonetheless, he noted that website owners who deploy “cookie walls” argue that they generally manage to obtain users’ consent. According to the MEP, this is because the majority of cookie banners do not provide fair choices: it is currently hard for users to identify the correct path to reject tracking on most websites. The panelist added that it should not be possible to subject users to consent requests each time they open a new website, nor for website owners to deny access when users refuse consent. He argued that the information data brokers can gather about internet users is often very sensitive and could be used to manipulate or blackmail them. This, according to Breyer, reinforces the argument for banning targeted ads, particularly because research has shown that publishers’ revenues are not meaningfully affected when they replace personalized ads with contextual ones.
What to expect from ePrivacy Regulation negotiations?
Van Eijk invited the speakers to make some predictions about how, and how fast, the ePrivacy Regulation trilogue between the EU lawmakers will progress, given that France will take over the Council’s Presidency in January 2022.
Breyer pointed out that France has taken a very harsh stance in ePrivacy negotiations within the Council, notably coming up with data retention language for the Council’s negotiating mandate. After stressing that the Court of Justice of the European Union (CJEU) has consistently ruled that indiscriminate data retention for law enforcement purposes breaches the EU Charter of Fundamental Rights, the MEP revealed that the EP is not willing to compromise at any price in the ePrivacy saga. He predicted that the EP would not accept watering down the existing level of electronic communications confidentiality protection under the ePrivacy Directive, in particular when it comes to the purpose limitation principle.
Talus identified the ePrivacy Regulation as an opportunity for the EU to clarify DPAs’ competencies when it comes to enforcing electronic communications privacy rules. Currently, many countries — including Finland — reserve enforcement powers to their Telecom regulators in this space. Talus believes that companies and individuals do not benefit from the blurring of each authority’s competencies, and that when it comes to personal data processing, DPAs should take the lead, also to ensure the coherent application of the GDPR and ePrivacy norms.
Eidvall stated that, regardless of whether the French Presidency is able to advance the ePrivacy negotiations, mounting enforcement and self-regulation, as well as growing data subject awareness, are likely. In response to a question raised by van Eijk on the impact that the upcoming final Belgian DPA decision in the IAB RTB case promises to have on self-regulation instruments, Eidvall mentioned other relevant ongoing inspections, like the ones triggered by NOYB’s complaints.
Hopp said that regulators are expected to come up with a solution to the cookie conundrum even if the ePrivacy Regulation is not approved. On van Eijk’s question of whether the GDPR already provides grounds for banning dark patterns and conditional consent practices (like cookie walls), Hopp underlined that the question of consent validity is clearly answered in the GDPR, including when it comes to “mandatory consent” practices on news websites.
Contextual advertising: a silver bullet or a distant reality?
Following Breyer’s calls for a paradigm shift toward contextual online ads, the moderator referred to how the Dutch public broadcaster (NPO) applied such techniques and actually bolstered its advertising revenues. He then asked the panelists whether the opportunities for innovation in the contextual advertising sphere were worthy of further exploration.
Eidvall mentioned that her clients often express interest in using anonymization techniques in the online advertising space, to find alternatives that would be equally effective without processing personal data. However, she noted that anonymization itself qualifies as “processing” under the GDPR. In any case, she reported on a number of initiatives that seek to eliminate personal data from the process, also relying on ethical approaches as a unique selling point.
Hopp noted his clients’ lack of appetite for combining, e.g., differential privacy with contextual ads for measuring the reach of their ad campaigns. Instead, he highlighted their concerns about the phasing out of third-party cookies and their wishes to deploy first-party cookies for ad measurement. In this regard, Hopp took the view that anonymizing first-party data for strict measurement purposes should not be legally necessary, as long as companies comply with the purpose limitation principle and do not leverage it for user profiling.
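For readers unfamiliar with the technique Hopp referenced, differentially private reach measurement can be as simple as adding calibrated noise to a campaign’s unique-viewer count before reporting it. The sketch below is our own illustration of the general idea, not anything presented on the panel; the count and the epsilon value are arbitrary.

    import random

    def dp_reach(true_count, epsilon=0.5):
        # A unique-viewer count changes by at most 1 when one user is added
        # or removed, so Laplace noise with scale 1/epsilon yields
        # epsilon-differential privacy. The difference of two
        # Exponential(epsilon) draws is distributed as Laplace(0, 1/epsilon).
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise

    # Publish a noisy figure instead of the exact count of unique viewers.
    print(round(dp_reach(125000)))

Smaller epsilon values add more noise and stronger privacy; the reported figure remains useful for campaign-level reach while revealing little about any single viewer.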
To conclude, van Eijk stated that the lawfulness of first-party data use in the online context depends on the impact on the rights and freedoms of individuals, as well as the nature of the data at stake. In the moderator’s view, processing browsing behavior, children’s data, and special categories of data for targeting purposes may carry unacceptable risks. He pointed to groups trying to reach a consensus on what “privacy by design” means in the online advertising context, such as working groups at the W3C. In this regard, it is worth keeping an eye on the change announced by Google to move away from the FLoC identifier toward more topic-based data, a more privacy-friendly solution that could change the paradigm of the online advertising ecosystem.
Organizations must lead with privacy and ethics when researching and implementing neurotechnology: FPF and IBM Live event and report release
A New FPF and IBM Report and Live Event Explore Questions About Transparency, Consent, Security, and Accuracy of Data
The Future of Privacy Forum (FPF) and the IBM Policy Lab released recommendations for promoting privacy and mitigating risks associated with neurotechnology, specifically with brain-computer interfaces (BCIs). The new report provides developers and policymakers with actionable ways this technology can be implemented while protecting the privacy and rights of its users.
“We have a prime opportunity now to implement strong privacy and human rights protections as brain-computer interfaces become more widely used,” said Jeremy Greenberg, Policy Counsel at the Future of Privacy Forum. “Among other uses, these technologies have tremendous potential to treat people with diseases and conditions like epilepsy or paralysis and make it easier for people with disabilities to communicate, but these benefits can only be fully realized if meaningful privacy and ethical safeguards are in place.”
Brain-computer interfaces are computer-based systems that are capable of directly recording, processing, analyzing, or modulating human brain activity. The sensitivity of data that BCIs collect and the capabilities of the technology raise concerns over consent, as well as the transparency, security, and accuracy of the data. The report offers a number of policy and technical solutions to mitigate the risks of BCIs and highlights their positive uses.
“Emerging innovations like neurotechnology hold great promise to transform healthcare, education, transportation, and more, but they need the right guardrails in place to protect individuals’ privacy,” said IBM Chief Privacy Officer Christina Montgomery. “Working together with the Future of Privacy Forum, the IBM Policy Lab is pleased to release a new framework to help policymakers and businesses navigate the future of neurotechnology while safeguarding human rights.”
FPF and IBM have outlined several key policy recommendations to mitigate the privacy risks associated with BCIs, including:
Rethinking transparency, notice, terms of use, and consent frameworks to empower people around uses of their neurodata;
Ensuring that BCI devices are not allowed for uses to influence decisions about individuals that have legal effects, livelihood effects, or similar significant impacts—such as assessing the truthfulness of statements in legal proceedings; inferring thoughts, emotions or psychological state, or personality attributes as part of hiring or school admissions decisions; or assessing individuals’ eligibility for legal benefits;
Promoting an open and inclusive research ecosystem by encouraging the adoption of open standards for the collection and analysis of neurodata and the sharing of research data with appropriate safeguards in place.
Policymakers and other BCI stakeholders should carefully evaluate how existing policy frameworks apply to neurotechnologies and identify potential areas where existing laws and regulations may be insufficient for the unique risks of neurotechnologies.
FPF and IBM have also included several technical recommendations for BCI devices, including:
Providing hard on/off controls for users;
Allowing users to manage the collection, use, and sharing of personal neurodata on devices and in companion apps;
Offering heightened transparency and control for BCIs that send signals to the brain, rather than merely receive neurodata;
Utilizing best practices for privacy and security to store and process neurodata and use privacy enhancing technologies where appropriate; and
Encrypting sensitive personal neurodata in transit and at rest (see the sketch below).
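On that last point, at-rest encryption of neurodata need not be exotic; standard authenticated encryption applies. Here is a minimal sketch using Python’s widely used cryptography library; the sample record and the inline key generation are illustrative assumptions on our part (a real deployment would keep keys in a managed or hardware-backed store), not guidance from the report.

    from cryptography.fernet import Fernet

    # Illustration only: in production, fetch the key from a secure key store.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # A hypothetical serialized EEG reading from a BCI device.
    neurodata = b'{"channel": "C3", "microvolts": [12.4, 11.9, 13.1]}'

    ciphertext = fernet.encrypt(neurodata)  # authenticated; safe to write to disk
    assert fernet.decrypt(ciphertext) == neurodata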
FPF-curated educational resources, policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are available here.
Read FPF’s four-part series on Brain-Computer Interfaces (BCIs), providing an overview of the technology, use cases, privacy risks, and proposed recommendations for promoting privacy and mitigating risks associated with BCIs.
Dispatch from the Global Privacy Assembly: The brave new world of international data transfers
The future of international data transfers is multi-dimensional: it is exploring new territories around the world, featuring binding international agreements for effective enforcement cooperation, and slowly entering the agenda of high-level intergovernmental organizations. All this surfaced from notable keynotes delivered during the 43rd edition of the Global Privacy Assembly Conference, hosted remotely by Mexico’s data protection authority, INAI, on October 18 and 19.
“The crucial importance of data flows is generally recognized as an inescapable fact”, noted Bruno Gencarelli, Head of Unit for International Data Flows and Protection at the European Commission, at the beginning of his keynote address. Indeed, from the shockwaves sent by the Court of Justice of the EU (CJEU) with the Schrems II judgment in 2020, to the increasingly prominent data localization push in several jurisdictions around the world, underpinned by the reality that data flows have been at the center of daily life during the pandemic, with remote work, school, global conferences, and everything else, the field of international data transfers is more important than ever. Because, as Gencarelli noted, “it is also generally recognized that protection should travel with the data”.
Latin America and Asia Pacific, the “real laboratories” of new data protection rules
Gencarelli then observed that the conversation on international data flows has become much more “global and diverse”, shifting from the “traditional transatlantic debate” to a truly global conversation. “We are seeing a shift to other areas of the world, such as Asia-Pacific and Latin America. This doesn’t mean that the transatlantic dimension is not a very important one, it’s actually a crucial one, but it is far from being the only one”, he said. These remarks come as the US Government and the European Commission have been negotiating for more than a year a framework for data transfers to replace the EU-US Privacy Shield, invalidated by the CJEU in July 2020.
In fact, according to Gencarelli, “Latin America and Asia-Pacific are today the real laboratories for new data protection rules, initiatives and solutions. This brings new opportunities to facilitate data flows with these regions, but also between those regions and the rest of the world”. The European Commission has recently concluded adequacy talks with South Korea, after having created the largest area of free data flows for the EU with Japan two years ago.
“You will see more of that in the coming months and years, with other partners in Asia and Latin America”, he added, without specifying which jurisdictions are next in the adequacy pipeline. Earlier in the conference, Jonathan Mendoza, Secretary for Personal Data Protection at INAI, had mentioned that Mexico and Colombia are two of the countries in Latin America that have been engaging with the European Commission for adequacy.
However, until the European Commission officially communicates about advanced adequacy talks or the renewal of pre-GDPR adequacy decisions, we will not know which jurisdictions those are. In an official Communication from 2017, “Exchanging and protecting personal data in a globalized world”, the Commission announced that, “depending on progress towards the modernization of its data protection laws”, India could be one of those countries, together with countries from Mercosur and countries from the “European neighborhood” (this could potentially refer to countries in the Balkans or on the Southern and Eastern borders, like Moldova, Ukraine or Turkey, for example).
“Adequacy” of foreign jurisdictions as a ground to allow data to flow freely has become a standard for international data transfers gaining considerable traction beyond the EU in new legislative data protection frameworks (see, for instance, Articles 33 and 34 of Brazil’s LGPD, Article 34(1)(b) of the Indian Data Protection Bill with regard to transfers of sensitive data, or the plans recently announced by the Australian government to update the country’s Privacy Law, at p. 160). Even where adequacy is not expressly recognized as a ground for transfers, like in China’s Personal Information Protection Law (PIPL), the State still has an obligation to promote “mutual recognition of personal information protection rules, standards etc. with other countries, regions and international organizations”, as laid down in Article 12 of the PIPL.
However, as Gencarelli noted in his keynote, at least from the European Commission’s perspective, “beyond that bilateral dimension work, new opportunities have emerged”. He particularly mentioned “the role regional networks and regional organizations can play in developing international transfer tools.”
One example that he gave was the model clauses for international data transfers adopted by ASEAN this year, just before the European Commission adopted its new set of Standard Contractual Clauses under the GDPR: “We are building bridges between the two sets of model clauses. (…) Those two sets are not identical, they don’t need to be identical, but they are based on a number of common principles and safeguards. Making them talk to each other, building on that convergence can of course significantly facilitate the life of companies present in ASEAN and in the EU”.
The convergence of data protection standards and safeguards around the world “has reached a certain critical mass”, according to Gencarelli. This will lead to notable opportunities to cover more than two jurisdictions under some transfer tools: “[they] could cover entire regions of the world and on that aspect too you will see interesting initiatives soon with other regions of the world, for instance Latin America.
This new approach to transfers can really have a significant effect by covering two regions, a significant network effect to the benefit of citizens, who see that when the data are transferred to a certain region of the world, they are protected by a high and common level of protection, but also for businesses, since it will help them navigate between the requirements of different jurisdictions.”
Entering the world of high-level intergovernmental organizations and international trade agreements
One of the significant features of the new landscape of international data transfers is that it has now entered the agenda of intergovernmental fora, like the G7 and G20, in an attempt to counter data localization tendencies and boost digital trade. “This is no longer only a state-to-state discussion. New players have emerged. (…) If you think of data protection and data flows, we see it at the top of the agenda of G7 and G20, but also regional networks of data protection authorities in Latin America, in Africa, in Europe”, Gencarelli noted.
One particular initiative in this regard, spearheaded by Japan, was extensively explored by Mieko Tanno, the Chairperson of Japan’s Personal Information Protection Commission (PIPC), in her keynote address at the GPA: the Data Free Flow with Trust initiative. “The legal systems related to data flows (…) differ from country to country reflecting their history, national characteristics and political systems. Given that there is no global data governance discipline, policy coordination in these areas is essential for free flow of data across borders. With that in mind, Japan proposed the idea of data free flow with trust at the World Economic Forum annual meeting in 2019. It was endorsed by the world leaders of the G20 Osaka summit in the same year and we are currently making efforts in realizing the concept of DFFT”, Tanno explained.
A key characteristic of the DFFT initiative, though, is that it builds on existing legal frameworks in participating jurisdictions and does not appear to propose new solutions that would enhance the protection of personal data in cross-border processing, or the trust needed to allow free flows of data. Two days after the GPA conference took place, the G7 group adopted a set of Digital Trade Principles during their meeting in London, including a section dedicated to “Data Free Flow with Trust”, which confirms this approach.
For instance, the DFFT initiative specifically outsources to the OECD the thorny issue of appropriate safeguards for government access to personal data held by private companies, the issue that underpinned both CJEU invalidations of European Commission adequacy decisions covering self-regulatory privacy frameworks in the US. While the OECD’s efforts in this respect hit a roadblock this summer, the GPA managed to adopt, during the Closed Session of the conference, a resolution on Government Access to Personal Data held by the Private Sector for National Security and Public Safety Purposes, which includes substantial principles like transparency, proportionality, independent oversight, and judicial redress.
However, one interesting idea surfaced among the DFFT-related proposals that the PIPC is promoting for further consideration in these intergovernmental fora, according to Mieko Tanno: the introduction of a global corporate certification system. No further details about this idea were shared at the GPA, but since the DFFT initiative will continue to make its way through the agendas of international fora, we may learn more soon.
One final layer of complexity in the international data transfers debate is the intertwining of data flows with international trade agreements. In his keynote, Bruno Gencarelli spoke of “synergies that can be created between trade instruments on the one hand and data protection mechanisms on the other hand”, and stressed the importance of breaking down the silos between the two. This is already happening to a certain degree, as shown by the chart annexed to this G20 Insights policy brief on “provisions in recent trade agreements addressing privacy for personal data and consumer protection”.
An essential question to consider for this approach is, as pointed out by Dr. Clarisse Girot, Director of FPF Asia-Pacific, when reviewing this piece, “how far can we build trust with trade agreements?”. Usually, trade agreements “guarantee an openness that is appropriate to the pre-existing level of trust”, as noted in the G20 Insights policy brief.
EU will seek a mandate to negotiate international agreements for data protection enforcement cooperation
Enforcement cooperation for the application of data protection rules in cross-border cases is one of the key areas that requires significant improvement, according to Bruno Gencarelli: “When you have a major data breach or a major compliance issue, it simultaneously affects several jurisdictions, hundreds of thousands, millions of users. It makes sense that the regulators who are investigating at the same time the same compliance issues should be able to effectively cooperate. It also makes sense because most of the new modernized privacy laws have a so-called extraterritorial effect”.
Gencarelli also noted that the lack of effectiveness of current arrangements for enforcement cooperation for privacy and data protection law surfaces especially when it is compared to other regulatory areas, like competition and financial supervision. In those areas, enforcers have binding tools that allow “cooperation on the ground, exchange of information in real time, providing mutual assistance to each other, carrying out joint investigations”.
In this sense, the European Union has plans to create such a binding toolbox for regulators. “The EU will, in the context of the implementation of the GDPR, seek a mandate to negotiate such agreements with a number of international partners”, announced Bruno Gencarelli in his keynote address.
The more than 130 privacy and supervisory authorities from around the world that are members of the GPA are very keen on enhancing their cooperation and making it permanent, both in policy matters and in enforcement, as is evident from the Resolution on the Assembly’s Strategic Direction for 2021-2023 adopted by the GPA during this year’s Conference, under the leadership of Elizabeth Denham and her team at the UK’s Information Commissioner’s Office. This two-year Strategy proposes concrete action, such as “building skills and capacity among members, particularly in relation to enforcement strategies, investigation processes, cooperation in practice and breach assessment”. The binding toolbox for enforcement cooperation that the EU might promote internationally would no doubt boost these initiatives.
In a sign that, indeed, the data protection and privacy debate is increasingly vibrant outside traditional geographies for this field, Mexico’s INAI was voted as the next Chair of the Executive Committee of the GPA and entrusted to carry out the GPA’s Strategy for the next two years.
Video recordings of all keynote sessions at this year’s GPA Annual Conference are available on demand on the Conference’s platform for attendees who registered for the event.
FPF Files Comments on CPRA Initial Rulemaking
Yesterday, the Future of Privacy Forum filed comments with the California Privacy Protection Agency on the initial rulemaking under the California Privacy Rights Act (CPRA). The CPRA, which comes into effect in 2023, provides protections for sensitive personal information, expands the California Consumer Privacy Act’s opt-out rights, and requires businesses to provide mechanisms for individuals to access, correct, and delete data.
FPF offered resources and recommendations regarding automated decisionmaking, sensitive personal information, global opt-out signals, and de-identification. Among our comments, we suggest that regulations under the CPRA should:
Establish guidelines for automated decisionmaking (ADM) that produces “legal or similarly significant effects.”
Provide that information about “automated decisionmaking” follow NIST interpretability guidelines, and be meaningful and reasonably understandable to the average consumer.
Clarify a range of potential use cases for health and wellness data, by providing a principled, exemplar list of categories that are in or out of scope. In many cases, such distinctions will be based on context and reasonable use.
Ensure opportunities for socially beneficial commercial research using sensitive personal information.
Clarify the role of global opt-out signals in the context of today’s labyrinth of existing permission frameworks, including in authenticated and non-authenticated platforms (a brief illustrative sketch follows this list).
Establish an open process for authoritative approval of new global opt-out signals that meet the technical specifications of the Agency over time.
Seek further input from de-identification experts and researchers to clarify key implementation issues for “deidentified data,” including the role of technical, legal, and administrative controls, and Privacy Enhancing Technologies (PETs).
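As one illustration of what honoring a global opt-out signal might involve, here is a minimal sketch in Python. It assumes a GPC-style signal conveyed in the Sec-GPC HTTP request header, as proposed in the Global Privacy Control specification; the function name, user_id parameter, and opt_out callback are hypothetical, not part of any specification or of FPF’s comments.

```python
# Minimal sketch: honoring a browser-based global opt-out signal server-side.
# The "Sec-GPC: 1" request header comes from the Global Privacy Control
# proposal; the function below and its opt_out callback are hypothetical.
from typing import Callable, Mapping


def apply_global_opt_out(headers: Mapping[str, str],
                         user_id: str,
                         opt_out: Callable[[str], None]) -> bool:
    """Treat Sec-GPC: 1 as a request to opt the user out of data sale/sharing."""
    if headers.get("Sec-GPC", "").strip() == "1":
        opt_out(user_id)  # e.g., flag the profile so data is not sold or shared
        return True
    return False


# Usage example, with an in-memory set standing in for a preference service.
opted_out = set()
honored = apply_global_opt_out({"Sec-GPC": "1"}, "user-123", opted_out.add)
assert honored and "user-123" in opted_out
```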
Event Report from DigitalxADB: Driving Digital Development across Asia and the Pacific
On October 27, the Future of Privacy Forum (FPF)’s Asia-Pacific office and the Asian Development Bank (ADB) co-hosted an online event titled “Trade-Offs or Synergies? Data Privacy and Protection as an Engine of Data-Driven Innovation” in the context of DigitalxADB. This edition was the third in ADB’s series of annual knowledge-sharing events for representatives of ADB’s 68 member countries and external partners to learn about and take part in efforts to further integrate “digital” into ADB.
1. Background
By way of background, ADB was conceived in the early 1960s as a financial institution that would be Asian in character and foster economic growth and cooperation in one of the poorest regions in the world. Despite the region’s many successes, it remains home to a large share of the world’s poor: 263 million people living on less than US$1.90 a day and 1.1 billion on less than US$3.20 a day. ADB assists its members and partners by providing loans, technical assistance, grants, and equity investments to promote social and economic development. ADB maximizes the development impact of its assistance by facilitating policy dialogues, providing advisory services, and mobilizing financial resources through co-financing operations that tap official, commercial, and export credit sources.
For FPF, co-organizing this digital policy dialogue with an international organization as important in the region as ADB was an opportunity to demonstrate its intention to be useful to the data protection and privacy community in Asia through a wide variety of means. FPF Asia-Pacific sees its role as an expert and neutral platform for cooperation, capable of supporting all kinds of actions that contribute to the development of best practices in data protection and privacy, help bridge the gaps between law and practice, advance thought leadership, and support coherent policy development in this area. Such cooperation must involve a wide variety of stakeholders, whether from the public or private sectors, national or regional, and where appropriate in partnership with international organizations.
2. Key takeaways
This event consisted of two panel discussions.
The first, titled “Industry Expectations and Cooperation with Privacy Regulators in Asia,” was moderated by Yoonee Jeong (Senior Digital Specialist, ADB) and attended by panelists Marcus Bartley-Johns (Asia Regional Director, Government Affairs and Public Policy, Microsoft), Yen Vu (Principal and Country Manager, Rouse Vietnam), and Royce Wee (Director, Head of Global Public Policy, Alibaba Group).
The second, titled “To Be or to Become a Privacy Regulator in Asia in the 2020s: What Challenges, What Role for International Cooperation?” was moderated by Dr. Clarisse Girot (Director for Asia Pacific, FPF) and attended by panelists Michael McEvoy (Information and Privacy Commissioner, British Columbia, Canada, and Chair, Asia Pacific Privacy Authorities Forum – APPA), Yeong Zee Kin (Assistant Chief Executive, Infocomm Media Development Authority – IMDA, and Deputy Commissioner, Personal Data Protection Commission – PDPC, Singapore), and Prof Thitirat Thipsamritkul (Faculty of Law, Thammasat University, and Vice President of the Digital Council of Thailand).
This post summarizes the discussions in these two stellar panels and highlights key takeaways:
There is growing momentum for data protection and privacy in Asia. In 2020/21 alone, Singapore, Japan, South Korea, New Zealand, China, and Thailand upgraded or passed data protection laws, while Brunei, India, Indonesia, Vietnam, and Sri Lanka, among others, moved closer to adopting data protection frameworks of their own. Panellists Yen Vu and Thitirat Thipsamritkul shared first-hand experiences with the development of data protection legislation in Vietnam and Thailand, respectively, while Yeong Zee Kin and Michael McEvoy shared their national and international experience as seasoned regulators in Singapore and British Columbia, Canada.
A key consideration for data protection law in Asia is finding the right balance between convergence with global standards and adaptation to local conditions. As more data protection laws in Asia tend to be developed with reference to frameworks and policies from outside Asia, policymakers in Asia must find a way to integrate data protection and privacy principles with Asia’s unique histories, cultures, and values to ensure that data protection laws win support from both businesses and citizens.
Data protection and privacy laws are most effective when made and implemented in partnership with businesses, industry associations, and civil society, as well as data protection regulators. Regulators and organisations can each learn important lessons from one another and, together with other key stakeholders, collaborate on tackling shared challenges and taking advantage of shared opportunities in the digital economy.
It is fundamental to support the development of the community of data protection regulators in Asia, whether through actions supporting individual national regulators or through the regional cooperation networks now developing in this region as elsewhere. Experience shows that regulators’ top priority must be educating businesses, government, and citizens, and equipping them with the right knowledge, tools, and capabilities to ensure the effectiveness of data protection law.
Trust, transparency, and accountability are key for businesses operating in Asia. Panellist Marcus Bartley-Johns related how Microsoft has come to recognize that Asian consumers, especially young people, are privacy-conscious and eager to understand how companies use their data. Similarly, panellist Royce Wee explained how trust is a key ingredient for a secure, inclusive, and sustainable digital economy, and increasing trust and transparency can create a win-win situation for consumers and businesses alike. In this regard, data protection laws play an important role to foster that trust.
What challenges to address, and what roles for ADB and FPF?
Thomas F. Abell (Advisor, SDCC and Chief of Digital Technology for Development, ADB) gave the introductory speech of the event and shared his insights into how the COVID-19 pandemic has accelerated the digital economy in Asia Pacific as the region increasingly relies on “digital.” 2020 was a record year in terms of member governments’ demand for ADB’s digital development programmes – roughly 20% of ADB’s projects in 2020 involved a significant digital component. Going forward, ADB is looking to increase support for its member governments in this area, from working on digital programs and security, to seeking thought leaders to drive digital development initiatives, to launching a new program in data analytics early next year.
Dr. Clarisse Girot (Director for Asia Pacific, FPF) explained how global activities have taken on an increasingly important dimension in FPF’s work, with the development of regional offices in Europe, Israel, and most recently Asia, with the launch of FPF’s Asia Pacific office in Singapore. In Asia Pacific, an essential mode of action will be to forge partnerships, run joint events, and bring together businesses, citizens, and international organisations to support governments and regulators in their efforts to adopt laws and policies that address growing privacy expectations, raise the level of data protection, and ultimately support economic growth and digitalisation in the region, especially in the wake of COVID-19.
From this point of view, the ambitions of FPF and ADB on these issues are entirely complementary. The event was an opportunity to explore with the panelists what their priority actions in this area could be, including joint actions where appropriate.
Dr. Girot further highlighted the tension between Asia’s status as not only the most populous but also most economically dynamic region in the world and the fact that data protection laws, for historical more than for political reasons, tend to be developed with reference to instruments, frameworks, and policies that have been designed and developed elsewhere – the EU’s General Data Protection Regulation (GDPR) being a case in point. Dr. Girot stressed the need to ensure that national frameworks are compatible with global standards that are necessary in a world where data flows are ubiquitous and underlie the digital economy.
But more prosaically, there is also a need to address the challenges that have blocked the adoption of data protection and privacy laws in some jurisdictions where they have been announced as “imminent” for several years. Passing a data protection law is not easy, even less so today than in the past. A major challenge in Asia is how to articulate data protection laws with the “geopolitically loaded” concept of “data sovereignty”, a concept which has taken root in China and India in particular and looks set to spread elsewhere. Another blocking factor is the legitimate concern that data protection and privacy laws would impose administrative constraints and compliance costs on local businesses, thereby restricting innovation and blocking trade. Baseline data protection laws also intersect with sectoral laws, so a great deal of fine-tuning is required, and defining the material scope of the law is not easy. Such concerns extend to the decision whether to institute a data protection and privacy regulator and to give it powers of oversight, including over governments.
To address these challenges, regional and international cooperation, and cooperation between the public and private sectors, academia and civil society, is essential. Events like DigitalxADB are thus an opportunity to demonstrate the wealth of resources that international cooperation brings. They also help to identify the multiple ways in which both public and private actors, including FPF and ADB, can contribute by providing support for governments and regulators in Asia to tackle these challenges—be it financial, material, or “intellectual”.
The two panel discussions were set up to approach these subjects from two complementary angles.
Panel 1: “Industry Expectations and Cooperation with Data Protection and Privacy Regulators in Asia”
This first panel, moderated by Yoonee Jeong, comprised industry representatives from different backgrounds who share the same difficulties in complying with fluctuating and variable data protection rules in the region. During the conversation, each panelist was asked how they envisioned ADB or FPF usefully contributing to addressing these challenges.
Below is a synthesis of the main comments made by each panelist in the course of the conversation.
Marcus Bartley-Johns (Asia Regional Director, Government Affairs and Public Policy, Microsoft) opened his comments by lauding ADB and FPF for coming together to convene this dialogue, underlining the great value of combining ADB’s unique convening power and ability to work with countries across the region with FPF’s capacity to share global expertise on privacy regulation and its deep connections with the privacy community across Asia. He went on to share two key insights from Microsoft’s view of data protection and privacy issues around the Asia-Pacific region.
The first is that privacy is essential for both organisations and individuals across Asia, and effective privacy regulation is therefore central to the growth of the digital economy in the region. In this respect, Microsoft and research firm IDC conducted a survey of the perceptions and expectations of trust in digital services of more than 6,000 consumers in the region in 2019. 53% of those consumers reported feeling that their personal privacy had been compromised or their trust breached when using digital services, and young people made up a disproportionate share of respondents reporting negative experiences. This challenges the oft-held assumption that because young people, especially in Asia, are heavy consumers of digital services, they do not care about privacy. A further example: of the 19 million unique visitors to Microsoft’s privacy dashboard in 2020, Australia, China, Japan, Korea, and India were all among the top 20 countries of origin for visitors who came to view, export, or delete their data.
The second is that opportunities for collaboration on data protection and privacy abound. Organisations like FPF and ADB (among other stakeholders) can play a key role in developing privacy regulation by providing resources and technical assistance to countries that are considering privacy regulation, and consultation to countries that are drafting new privacy regulations or amending existing ones. In particular, regulation needs to be technology-neutral, as there is a temptation among regulators in Asia to look for an easy fix – such as contractual terms – to demonstrate privacy protection.
There are also opportunities for regional cooperation to counter the trend of countries working in “silos,” which leads to a fragmented regulatory framework that will not support trade and investment and will increase costs for local companies – especially Small and Medium Enterprises (SMEs), which, unlike large multinational companies (MNCs), cannot invest significant funds or employ hundreds of full-time engineers to transform their data management. In this regard, Singapore has been instrumental in driving greater regulatory coherence in ASEAN. More work on interoperability is needed to ensure that compliance is as straightforward as possible for SMEs while still keeping a high bar for privacy protection across the region’s regulatory landscape.
Yen Vu (Principal and Country Manager, Rouse Vietnam) shared the experiences of Vietnam as the country developed its first personal data protection decree, which she hopes will be passed and take effect by the end of this year.
Despite facing technological, economic, and societal challenges, Southeast Asia has an opportunity to become a digital economy hub for Asia. For example, even as large parts of Vietnam were under strict lockdown due to COVID-19, its Internet-based economy still reported growth in transportation, food, e-commerce, and fintech. The challenges come from an ever-shifting regulatory environment in both Vietnam and the region, as well as the need for training and awareness-building for both the public and private sectors.
In 2020, Vietnam became one of the first countries internationally to announce a programme for national digital transformation. Data protection will be key to this digital transformation programme, which aims to develop digital government, economy, and society and to equip Vietnamese digital businesses with global capacity in key areas – including healthcare, education, finance, banking, agriculture, transportation, energy, natural resources, the environment, and industrial production – over the next decade.
However, the situation on the ground is one of regulatory fragmentation, as Vietnam still lacks an omnibus law on data privacy. This has caused confusion and poses challenges for businesses across all sectors, which must often seek guidance from the government on how to comply with requirements under security laws, such as data localization. There are opportunities for international organisations like FPF and ADB to support Vietnam, especially through capacity-building activities for both the public and private sectors.
Royce Wee (Director, Head of Global Public Policy, Alibaba Group) highlighted that now is a very interesting time to be in Asia because more and more Asian countries are coming up with data protection laws. Thailand recently joined Singapore, the Philippines, and Malaysia as jurisdictions which already have data protection laws in place, while Brunei, Indonesia, Vietnam, and India are moving closer to adopting new data protection laws. China is also a major mover in this space, having passed a trio of data-related laws in a short time – the Cybersecurity Law, the Data Security Law, and most recently the Personal Information Protection Law (PIPL), which came into effect at the start of November 2021.
These data protection laws are not homogeneous but rather, reflect each country’s philosophies, outlooks, and values as well as its unique needs and circumstances. Data protection is not a solely European construct, and each country has to strike a balance between individual rights and control on the one hand and reasonable/legitimate business needs on the other hand.
This can create significant challenges for MNCs like Alibaba Group, whose compliance policies must be localized to meet each jurisdiction’s standards and requirements. In this respect, MNCs typically adopt a “high watermark”, such as that set by the EU GDPR, as a starting point and then make adjustments based on the specificities of local data protection laws.
However, this is only a narrow view of data protection. Trust remains an overarching objective for these laws and is a key ingredient for a secure, inclusive, and sustainable digital economy. For organisations, trust helps to build long-term relationships with customers in which customers will be more willing to provide more and better-quality data, and organisations will be better placed to provide high-quality services and value-for-money products to meet their customers’ needs.
For regulators, trust in the digital economy allows for greater economic development and dynamism and can help to bridge the digital divide, opening the digital economy to greater participation from all segments of society while also creating better jobs with higher incomes by matching skills and demand and enabling better policy implementation.
The road to trust is one of constant, iterative improvement because – due to fast-paced changes in technology, business models, consumer expectations, and even societal values – the journey never really has an end in sight.
Regulators play an important role in pushing businesses to do more and to do better in a spirit of partnership and goodwill rather than adversity. At the same time, businesses play an important role in uplifting data protection standards across the board. While MNCs have an important signalling effect, the real power to “move the needle” on data protection standards and processes lies with SMEs, as they represent the vast majority of businesses in Asia. Regulators can do a lot to bring SMEs on board by issuing guidelines, providing clarity on their regulatory intent, and supplying tools and technological solutions. For example, in Singapore, the Infocomm Media Development Authority (IMDA) came up with “tech packs” containing solutions that SMEs can easily adopt and adapt to meet their business needs while ensuring at least a minimum baseline of data protection.
Cooperative partnership between regulators and businesses is a prerequisite for developing the right culture of data accountability in organizations. Regulators should explain their regulatory objectives, concerns, and priorities, but also understand the constraints and limitations of businesses’ daily operations. Similarly, businesses should understand those regulatory objectives, concerns, and priorities, but also provide feedback as part of the consultation process before new laws are passed, to ensure that the laws are practical and effective and that businesses can comply with them. For example, if left to their own devices, some regulators in Asia have a strong tendency to write data localisation requirements into their laws. However, as the digital economy is essentially borderless, this can harm the cross-border data flows necessary for e-commerce and the adoption of cloud solutions.
International organisations like FPF and ADB can, through their thought leadership and convening power, play an important role in the law-making process: supporting innovative projects such as sandboxing schemes; exploring different models for data processing, innovation, and even valuation; promoting the harmonisation of baseline global principles and standards for data protection; working with and across regulators and businesses to create mechanisms that facilitate greater trusted and secure cross-border data flows; and promoting discussion and agreement on an ethical framework for data processing that covers emerging technologies such as artificial intelligence, machine learning, and the Internet of Things.
By sharing resources and expertise, regulators and businesses can build trust and solve common problems and achieve common objectives – from improving the transparency of data processing, to putting in place adequate security standards and agreeing on common criteria/list of reasonable and legitimate uses of personal data, to reskilling and upskilling workers for new jobs in the digital economy.
Panel 2: “To Be or to Become a Privacy Regulator in Asia in the 2020s: What Challenges, What Role for International Cooperation?”
This second panel, moderated by Dr. Clarisse Girot, comprised two data protection regulators (Yeong Zee Kin and Michael McEvoy) and an expert involved in the lawmaking process in Thailand (Prof Thitirat Thipsamritkul). Below is a synthesis of the main comments made by each panellist in the course of the conversation.
Professor Thitirat Thipsamritkul (Faculty of Law, Thammasat University, and Vice President of the Digital Council of Thailand) shared her experience with the development of Thailand’s draft personal data protection law, which was ultimately passed in 2019.
Historically, data protection and privacy had been seen as a side issue which was not as essential to Thailand’s digital economy as, for example, cybercrime, cybersecurity, and intellectual property law. Little by little, privacy law became more central to the discussion with the emergence of the EU GDPR and efforts by the public sector, academia, and civil society to bring privacy into legislative discussions around the digital economy. By 2019, with the passage of the Cybersecurity Law, the zeitgeist was that if Thailand needed a cybersecurity law, then it also needed a data protection law.
The legislative process for the resultant Personal Data Protection Act (PDPA) was unique in that it involved extensive collaboration between the public and private sectors, academia, and civil society. In particular, academia was instrumental in shaping the PDPA as it had already created “shadow regulation” in the form of the Thailand Data Protection Guidelines (TDPG) to help Thai companies to comply with the EU GDPR and do business with Europe. The Guidelines were widely used by Thai businesses and drew not only on international standards but also input from local businesses and organisations on the practicality of data protection measures. Even after the PDPA was passed, the Guidelines remained influential for businesses designing their compliance schemes.
Thai society is now ready to comply with the PDPA but has been occupied with the response to COVID-19 for the last year. Due to resistance to the PDPA from certain sectors of the economy, the Thai government postponed the PDPA’s entry into effect twice. There is a general fear that the PDPA gives too wide a discretion to regulators and the courts, that judicial interpretation will be uncertain because the PDPA introduces an entirely new framework into Thai law, and that the Act’s provisions on criminal liability for breaches are stringent.
The postponements have sparked a debate in Thailand as to whether privacy laws should be strengthened or whether the compliance burden should be reduced as a result of the pandemic. However, at the same time, many businesses, including those in the financial and health insurance sectors, have been declaring new privacy protective measures and policies even before the PDPA takes effect.
On a broader note, many data practices in Asia differ significantly from those in Europe or America – for example, Asia has a lot of online shopping livestreams, which are much less common in Europe and America. This means that each region must adopt different methods for implementing data protection and privacy principles, even if the core principles remain the same around the world. However, a shared problem for regulators around the world is capacity-building – and this is where international cooperation can be most effective.
Yeong Zee Kin (Deputy Commissioner, PDPC, Singapore) started with a word of encouragement for Thailand and explained that even Singapore’s journey to enacting data protection legislation started with a voluntary, industry-created model code, introduced in 2001-2002 – a decade before Singapore enacted its own PDPA in 2012. This was a necessary and helpful step toward full legislation, as local online businesses voluntarily adopted the code and began to prepare for full data protection legislation.
Yeong Zee Kin had three areas of advice for governments and policymakers working on data protection and privacy legislation:
the necessity for convergence with global norms when designing laws;
equipping businesses and companies with practical tools to implement the principles within their organisations; and
valuing partnerships with the data protection community and data protection officers, who can act as champions to help to build the data protection ecosystem.
On convergence with global norms, he stressed that nowadays, data “can’t be kept in a bottle” as it flows everywhere – both within and between economies around the globe – especially as companies operate in multiple jurisdictions. It is therefore essential to design laws that adhere to accepted global principles to the greatest extent possible, because such familiarity is important from the perspectives of both compliance and the expectations of consumers and data subjects. An example of such a global principle is the admonition against localisation of computing facilities. Other relevant global principles can be found in the OECD Privacy Guidelines, the APEC Privacy Principles, and, for Southeast Asia, the ASEAN Principles for Data Protection, as well as free-trade agreements like the CPTPP and RCEP.
At the same time, it is also necessary to adapt laws to local conditions – society, culture, and history. The recent amendments to Singapore’s PDPA, which were passed a year ago, illustrate the importance of convergence as well as adaptation to local conditions. In the amendments, Singapore adopted the concept of “legitimate interests” because it had become common in multiple data protection regimes worldwide. However, Singapore also recognized that its local businesses wanted clarity and found a concept as broad and generous as legitimate interests difficult to work with. In implementing the concept, Singapore therefore took a slightly different approach from other regimes and listed out specific examples of legitimate interests in the Schedule to the PDPA. Singapore also took the unique step of creating a “business improvement exception” based on suggestions by local companies, but still required express consent, rather than legitimate interests or business improvement, for direct marketing, based on feedback from local consumers.
Between convergence with global norms and adaptation to local conditions, we will probably see more regional groupings in data protection laws as factors like geographical proximity heavily influence culture and history, which in turn influence expectations of and approaches to data protection. We should encourage these regional groupings and cooperation – if regulators and policymakers can come together and find a common level, then we might end up with three or four regional groupings, which could then start building bridges between regions to encourage global consistency and convergence.
On equipping businesses with practical tools, Yeong Zee Kin recommended that regulators place themselves in the shoes of the local business owners and managers who must implement the principles in legislation. Regulators can use the kinds of common business objectives that companies care about, such as inventory management, analysis of sales performance, and management of customer and HR records, as an entry point for discussing how data can be used to achieve those objectives while embedding good data protection principles into the process. It is also important to recognize that businesses often need external help. To that end, Singapore’s PDPC curated a brief list of core data protection practices and provided a list of outsourced data-protection-as-a-service providers who can help business owners and managers with compliance.
Michael McEvoy (Information and Privacy Commissioner, British Columbia, and Chair, APPA Forum) agreed that there are many examples of jurisdictions going through a transition period from voluntary standards, guidelines, and principles to full data protection legislation, but added that in some cases, legislation may result from pressure from civil society, a shift to a more reformist government, or even simply a fluke of circumstances. However, even where the legislation seems to come to fruition suddenly, it is usually the product of many years of work and efforts to educate legislators.
Voluntary industry efforts in British Columbia – such as data breach notification, although there is not yet a legal obligation making notification mandatory in the province – can be the start of good practice, as they create a culture and environment of compliance. In his experience, businesses generally want to “do the right thing” but may not be able to figure out how to do it. There is also often misplaced fear on the part of businesses about regulators’ enforcement powers, and more generally that regulators may not understand the nature of innovative businesses; this can delay the adoption of a complete regulatory framework in some jurisdictions. Enforcement is certainly important, but the solution to such concerns is first and foremost for regulators to go out and educate businesses – and in some cases governments – on what the “right thing” is, and to provide guidance, education, and assistance.
As personal data follows the flow of trade, more and more countries are waking up to the need for effective, sustainable, and trustworthy regulation for an increasingly digital world. This idea underpins the work of the Asia Pacific Privacy Authorities (APPA) Forum over the past 30 years to nurture and promote data protection in the Asia Pacific region. While initially there was not a lot of interest in APPA’s work, there definitely is now: British Columbia, which is home to APPA’s Secretariat and does approximately $14 billion worth of export trade with Pacific Rim countries, has come to recognize the importance of data protection to digital trade, and its legislature now supports APPA financially.
APPA’s 19 members – all data protection regulators in the Asia Pacific – assist one another and share information and techniques to enhance their regulatory expertise. APPA has also extended a hand to jurisdictions outside the region, such as, recently, the Cayman Islands. Countries now considering new data protection laws where none previously existed are fortunate in that they can learn from the experiences of jurisdictions that have gone through this process, adapting what is useful and avoiding the regulatory missteps that unfortunately happen from time to time. No two countries’ data protection laws will ever be identical, because each country is informed by its own history and culture. However, countries across the globe share a commitment to at least some commonality, especially in allowing data to flow more freely and securely. In this respect, the GDPR and the concept of adequacy have been very helpful in the search for common ground and convergence on principles for protecting citizens’ data while encouraging trade, innovation, and the flow of data.
The session thus ended on a very encouraging note.
To conclude, ADB and FPF thanked the speakers and announced that they would consider joint actions to support positive data protection developments in the region, in the spirit of cooperation which animated the whole of this session.
This blog was written with the support of the Global Privacy team of the Future of Privacy Forum.
Data Sharing … By Any Other Name
Data Sharing … By Any Other Name (Would Still Be a Complex, Multi-stakeholder Situation)
“It is widely agreed that (data) should be shared, but deciding when and with whom raises questions that are sometimes difficult to answer.”[1]
Data sets held by commercial and government organizations are an increasingly necessary and valuable resource for researchers. Such data may become the evidence in evidence-based policymaking[2] or the data used to train artificial intelligence.[3] Some large data sets are controlled by government agencies, non-governmental organizations, or academic institutions, but many are being accumulated within the private sector. Academic researchers value access to this data as a way to examine any number of consumer, commercial, and scientific questions at a scale they cannot reach using conventional research data-gathering techniques alone. Such data gives researchers access to information that lets them answer questions on topics ranging from bias in targeted advertising, to the influence of misinformation on election outcomes, to early diagnosis of diseases through health and physiological data collected by fitness and health apps.
Recent attention on platform data sharing for research[4] is only one conversation in the cacophony of cross-talk on data sharing. The term “data sharing” is used to describe many different relationships in which data passes from one organization to another for a new purpose. Some uses of the term relate to academic and scientific research, and some relate to transfers of data for commercial or government purposes. At a moment when various types of data sharing have been elevated even to the attention of the US Congress and the European Commission[5], it is imperative that we be more precise about which forms of sharing we are referencing, so that the interests of the parties are adequately considered and the various risks and benefits are appropriately contextualized and managed. In the table below, we outline a taxonomy for this multiplicity of data sharing relationships.
Ultimately, the relationships between these entities are complex. In many cases, the relationship is one-to-many, with a single agency or corporation sharing data with multiple researchers and civil society organizations or, as in the case of data trusts or data donation platforms, one person sharing data with many research or commercial organizations through a trusted intermediate steward.[6] Likewise, researchers and civil society organizations may concurrently pursue data from multiple corporate or government organizations, in many cases to address challenges that require extremely large quantities of data (Big Data) or complex networks of related data. This data never flows along just a single channel, nor does it often stop after a single transfer. Governments and corporations share data with researchers; researchers return that data, generate new data, and share analysis, new questions, and outcomes back around.
Managing these complex relationships requires multi-layered contracts, defined procedures, accountability mechanisms, and other technical and policy controls. The terms for data sharing cover obligations of both parties, including privacy, ethics, governance, and other good-stewardship protocols. Changes in the legislative landscape around data protection, privacy, and security mean that these relationships must adjust periodically to meet legal compliance obligations, on either the data-sharing or the data-using side.
At the Future of Privacy Forum, we are working to add context, nuance, and a considered evaluation of the needs of these many players to create guidelines and best practices that support data sharing, particularly for the conduct of scientific and evidence-based policy research. What data is shared, and under which conditions, controls, contracts, and use environments, has important privacy and governance implications. We have been actively working in this area since 2015 and continue to engage with interested organizations around the challenges of today’s digital environment. With respect to the sharing of data itself, FPF is focused on finding ways to incorporate proportionate precautions so that any sharing activity adequately protects privacy and is designed with a full understanding of potential harms to the people whose data is transferred or the communities of which they are a part.
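One such proportionate precaution, offered here as a minimal hypothetical sketch rather than as an FPF recommendation, is pseudonymising direct identifiers with a keyed hash before a data set leaves the sharing organization. The Python below uses only standard-library modules; the records, field names, and key-handling arrangement are invented for illustration.

```python
# Minimal sketch: pseudonymising direct identifiers with a keyed hash (HMAC)
# before a data set leaves the sharing organization. A keyed hash, unlike a
# plain hash, resists re-identification by dictionary attack as long as the
# key stays with the sharer. Records and field names are hypothetical.
import hashlib
import hmac
import secrets

PSEUDONYM_KEY = secrets.token_bytes(32)  # held only by the sharing organization


def pseudonymise(identifier: str) -> str:
    digest = hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]


records = [
    {"email": "alice@example.com", "daily_steps": 9421},
    {"email": "bob@example.com", "daily_steps": 10337},
]

# Researchers receive stable pseudonyms that link a subject's rows together,
# but cannot recover the underlying email addresses without the key.
shared = [{"subject": pseudonymise(r["email"]), "daily_steps": r["daily_steps"]}
          for r in records]
```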
Data Sharing Relationships

Relationship 1
Data sharing organization: Government Agencies
Data using organization: Researchers and Research Institutions
Outcome: Researchers conduct evidence-based evaluations of public programs.

Relationship 2
Data sharing organization: Researchers and Research Institutions
Data using organizations: Private Companies or Corporations; Government Agencies; citizen groups, journalists, and communities of interest
Outcomes: Researchers whose work is sponsored by corporations, or who have privileged access to corporate data assets, return data gathered for future corporate research and, in many cases, retain copies of that data for future scientific work. Researchers whose work is sponsored by or conducted under a government contract return data gathered for future agency research and, in many cases, retain copies of that data for future scientific work. Citizen groups, journalists, and communities of interest (e.g., patient advocacy groups) can gain access to data about themselves gathered during the research process so that they can use it for future treatment, advocacy, or research participation.
Terms for the relationship: Return of research data and/or research results[18]

Relationship 3
Data sharing organization: Researchers and Research Institutions
Data using organization: Researchers and Research Institutions
Outcomes: Researchers can reuse other researchers’ data, or combine their primary data with others’ secondary data, to answer novel questions without putting people at risk of research harms by conducting further research with them. Archives can collect primary data from multiple researchers to streamline the acquisition of data to answer novel questions by re-examining existing data rather than conducting further research with human participants.

Relationship 4
Data sharing organization: Data stewardship bodies, such as Data Trusts or Data Donation Platforms
Data using organizations: Researchers and Research Institutions; Government Agencies; Private Companies or Corporations
Outcome: Individuals and groups share their data with others according to their interests, as specified to and protected by a trusted fiduciary actor.
Terms for the relationship: Data Trusts, Data Donation
[1] HHS Office of Research Integrity, ORI Introduction to RCR. https://ori.hhs.gov/content/Chapter-6-Data-Management-Practices-Data-sharing
[2] H.R.4174 – 115th Congress (2017-2018): Foundations for Evidence-Based Policymaking Act of 2018. (2019, January 14). https://www.congress.gov/bill/115th-congress/house-bill/4174
[3] “The Biden Administration Launches the National Artificial Intelligence Research Resource Task Force”. https://www.whitehouse.gov/ostp/news-updates/2021/06/10/the-biden-administration-launches-the-national-artificial-intelligence-research-resource-task-force/
[4] Goroff, Daniel, Jules Polonetsky, and Omer Tene. (2018). Privacy Protective Research: Facilitating Ethically Responsible Access to Administrative Data. The Annals of the American Academy of Political and Social Science, Vol 675, Issue 1, pp. 46-66. https://doi.org/10.1177/0002716217742605.
Harris, Leslie and Chinmayi Sharma. (2017). Understanding Corporate Data Sharing Decisions: Practices, Challenges, And Opportunities for Sharing Corporate Data with Researchers. Future of Privacy Forum. https://fpf.org/wp-content/uploads/2017/11/FPF_Data_Sharing_Report_FINAL.pdf.
[5] European Commission. (2021). “A European Strategy for Data” https://digital-strategy.ec.europa.eu/en/policies/strategy-data
[6] Open Data Institute. (2020). “Data Trusts in 2020”. https://theodi.org/article/data-trusts-in-2020