Future of Privacy Forum Releases Student Monitoring Explainer
On October 27, FPF released a new infographic, “Understanding Student Monitoring,” depicting the variety of reasons why schools monitor student digital activities, what types of student data are being monitored, and how that data could be used. While student monitoring is not new, it has gained significant traction recently due to the shift to remote learning and the increase in school-managed devices being issued to students.
“Student monitoring has been happening for years, but too often families only learn about it after their child has been flagged or they’ve read something in the news. And that lack of transparency creates questions and confusion about how exactly it works, and what is – and is not – being monitored,” said Amelia Vance, FPF’s Vice President of Youth and Education Privacy. “We hope that this infographic will help parents, students, educators, policymakers, and other stakeholders understand generally how student monitoring works and what it aims to do, and ultimately become empowered to ask questions about the monitoring products being used in their own districts, as there is often considerable variation.”
The infographic depicts the main reasons why schools monitor student activity online—ensuring student safety, legal compliance, and addressing community concerns—and highlights two areas of frequent confusion: what types of student data are being monitored, and how that data could be used.
While school administrators work with their chosen service provider to set up a monitoring system that meets their school’s needs, student data can be collected in multiple ways, including from:
School-Issued Devices: any student data that travels through an internet connection, wired or wireless, on a school-managed device.
School-Managed Internet Connections: data from students’ online content or activities on school-managed internet connections, potentially including take-home internet hotspots.
School Apps & Accounts: student data from certain school-managed accounts, regardless of whether students access the accounts from personal devices or personal internet connections at home.
Monitoring systems analyze student data from these sources for potential concerning indicators, which are typically related to warning signs of self-harm, violence, bullying, vulgarity, pornography, or illegal behaviors. Some systems flag content for human review. From there, depending on the nature and severity of the flagged content and monitoring system in place, several actions could occur. The student could be sent a warning, the content could be blocked, or a designated school contact could be alerted. These actions are explored in further depth in FPF’s accompanying blog.
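To make that flow concrete, the short sketch below illustrates, in deliberately simplified form, how a flagging-and-routing pipeline of this kind could work. The keyword lists, severity scores, and action thresholds are hypothetical and are not drawn from any particular vendor's product; commercial systems typically rely on far more sophisticated, often machine-learning-based, analysis.

```python
# Hypothetical, simplified sketch of a flag-and-route monitoring pipeline.
# Keywords, severity levels, and thresholds are illustrative only.

from dataclasses import dataclass

# Hypothetical indicator categories mapped to example keywords and severity scores.
INDICATORS = {
    "self_harm": ({"hurt myself", "end it all"}, 3),
    "violence": ({"bring a weapon"}, 3),
    "bullying": ({"everyone hates you"}, 2),
    "vulgarity": ({"expletive"}, 1),
}

@dataclass
class Flag:
    category: str
    severity: int
    matched_phrase: str

def analyze(text: str) -> list[Flag]:
    """Scan student-generated text for potentially concerning indicators."""
    text_lower = text.lower()
    flags = []
    for category, (keywords, severity) in INDICATORS.items():
        for kw in keywords:
            if kw in text_lower:
                flags.append(Flag(category, severity, kw))
    return flags

def route(flags: list[Flag]) -> str:
    """Decide an action based on the most severe flag (thresholds are illustrative)."""
    if not flags:
        return "no_action"
    worst = max(f.severity for f in flags)
    if worst >= 3:
        return "alert_designated_school_contact"  # e.g., counselor or administrator
    if worst == 2:
        return "queue_for_human_review"
    return "warn_student_or_block_content"

if __name__ == "__main__":
    sample = "I feel like everyone hates you, said the message"
    print(route(analyze(sample)))  # -> queue_for_human_review
```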
“Many school administrators, students, and families may be aware that monitoring systems seek to identify concerning indicators from students’ online activities, but there is often less understanding about what occurs once a system does flag concerning activity,” said Yasamin Sharifi, a Policy Fellow in FPF’s Youth and Education Privacy team. “FPF’s new infographic clarifies the analysis, actions and data retention that a monitoring system and school may perform. This understanding is crucial for any stakeholder seeking to comprehend the practical impacts of a student monitoring system.”
The Future is Open: The U.S. Turns to Open Banking
FPF is pleased to work with a broad set of stakeholders on concepts around privacy and open banking. For more information on our new Open Banking Working Group and related projects, please contact Jeremy Greenberg: [email protected].
Open banking is a concept that describes banks and other financial institutions, such as credit unions, providing rights to customers over their financial data, including the ability to access, share or port data to third parties for various services.
The inherent tensions found in open banking between privacy, competition, and data portability requirements mirror similar concerns across the spectrum of Big Data.
Current challenges to a widespread and healthy open banking ecosystem in the U.S. include a lack of harmonized rules and principles for maintaining strong privacy protections involving financial data and an absence of standardized technical architecture.
The Consumer Financial Protection Bureau (CFPB) will take the lead on facilitating open banking in the U.S. and crafting rules regarding data protection and security; the CFPB should consider lessons learned from international approaches.
Open banking proponents and policymakers should be mindful of the unique sensitivity of financial information and the complex data protection risks raised by increased sharing of banking data—even when sharing is directed by consumers.
Introduction
In July 2021, President Biden signed the Executive Order on Promoting Competition in the American Economy. The Executive Order takes a “whole of government approach” to enforcing antitrust laws across the economy, with clear implications for data protection and privacy. Notably, the order encourages the Consumer Financial Protection Bureau (CFPB) to consider crafting rules under section 1033 of the Dodd-Frank Act in support of open banking with the goal of making it easier for consumers to safely switch financial institutions and use novel and innovative financial products while maintaining privacy and security.
The Order’s callout signals that the Biden administration views open banking as an important initiative for promoting consumer choice, fostering competition, and protecting consumers’ privacy. The debate around open banking highlights tensions between privacy and competition, along with a number of privacy flashpoints, including data portability, access, sharing, transparency, control, and interoperability.
Open Banking Can Provide New Rights and Benefits to Consumers and Help Spur Competition, But Technical and Privacy Challenges Remain
Open banking is a concept that describes banks and other financial institutions, such as credit unions, providing rights to customers over their financial data, including the ability to share data or permissions over their data with third parties for various services. These rights include the right to access their financial data, port their data and switch financial institutions, and grant permission to third parties to carry out transactions and provide financial services that best meet a customer’s needs. For example, individuals could grant a third party access to their financial data to complete an automated payment or provide tailored financial planning advice based on the consumer’s individual finances or credit history. Proponents of open banking argue that another benefit is increased competition among financial institutions. Firms entering the financial sector may offer novel services that spur competition across the industry.
One current challenge to a widespread and healthy open banking ecosystem in the U.S. is a lack of harmonized rules and principles for maintaining strong privacy protections involving financial data. As a result, some traditional banking institutions concerned with maintaining strong customer privacy might be hesitant to support an open banking ecosystem that lacks clear and strong privacy rules and principles that equal, or exceed, the current financial privacy and security protections afforded to consumers by regulations such as the Gramm-Leach-Bliley Act (GLBA) or the Fair Credit Reporting Act (FCRA).
Another roadblock to widespread and privacy-protective open banking is the need for standardized technical architecture—particularly interoperable APIs—to enable the safe portability of financial data. A standardized and interoperable API would allow third parties to carry out their services on behalf of customers without accessing certain personal information, such as login credentials. In the absence of widely adopted secure APIs, third parties sometimes turn to screen scraping to perform services, collecting customer login credentials and other personal information along the way and creating privacy and security risks, such as exposure of consumer information in a data breach and consumer impersonation. While industry efforts such as the Financial Data Exchange’s (FDX) API are underway, the lack of standardized rules and technical standards, such as machine-readable file rules, can push third parties toward less privacy-protective methods of accessing data.
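As a rough illustration of why standardized APIs matter, the sketch below contrasts token-based data access with credential-based screen scraping. The endpoint, scope, and field names are invented for the example; they are not the FDX specification or any real bank’s API.

```python
# Illustrative only: a hypothetical, simplified data-access call showing why a
# token-scoped API is more privacy-protective than screen scraping. The endpoint
# URL and response fields are invented and are NOT the actual FDX specification.

import requests

def fetch_transactions_via_api(access_token: str, account_id: str) -> list[dict]:
    """Token-based access: the third party never sees the customer's bank login.

    The token is scoped (e.g., read-only transactions) and can be revoked by the
    customer or the bank without changing the customer's password.
    """
    resp = requests.get(
        f"https://api.examplebank.com/accounts/{account_id}/transactions",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("transactions", [])

# By contrast, screen scraping requires the third party to store the customer's
# actual username and password and to log in as the customer, which exposes full
# account access if the aggregator is breached and cannot be revoked granularly.
```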
The CFPB Will Continue Taking the Lead on Facilitating Open Banking in the U.S., While Considering Lessons Learned from International Approaches
Prior to the Executive Order, the CFPB had already taken preliminary steps to promote safe open banking in the U.S. In 2017, the agency released a set of broad, non-binding principles intended “to help foster the development of innovative financial products and services, increase competition in financial markets, and empower consumers to take greater control over their financial lives.” Key areas of focus include: data access (enabling consumers to obtain financial information in a timely manner without being compelled to share account credentials with third parties); informed consent (in which consumers understand terms & conditions and can readily revoke authorizations granted to third parties); payment authorization (in which third parties are required to obtain specific authorization for distinct activities); and efficient and effective accountability mechanisms (incentivizing stakeholders to prevent, detect, and resolve unauthorized access, sharing, and payments), among several other areas.
The CFPB next weighed in on the issue in 2020, when it held the CFPB Symposium: Consumer Access to Financial Records, at which experts discussed many of the concepts highlighted in the agency’s principles. Following the symposium, in October 2020, the CFPB initiated an Advance Notice of Proposed Rulemaking on consumer access to financial records and how the agency might develop rules implementing section 1033 of the Dodd-Frank Act. This is the same rulemaking effort highlighted in the Executive Order. The agency sought comments on the costs and benefits of open banking and on how it might handle many of the data protection-related concepts outlined in its 2017 principles, including access, control, privacy, security, and standard setting. The CFPB has not issued a final rule or concluded the rulemaking, but the agency recently listed data sharing in its current regulatory agenda. Beyond the CFPB, the Federal Reserve, the Federal Deposit Insurance Corporation (FDIC), and the Office of the Comptroller of the Currency (OCC) released a Proposed Interagency Guidance on Third Party Relationships: Risk Management, focusing on how banks manage risks in their third-party relationships with fintech companies, vendors, and other affiliates. While other regulators are involved in this space, the CFPB appears poised to return to its rulemaking effort as a near-term priority.
While the U.S. is serious about responsibly regulating and setting standards for open banking, several international models are already well down this path. In 2015, the EU adopted an updated Payment Services Directive (PSD2), which went into effect in 2018. PSD2 aims to promote competition, privacy, and data transfer between EU countries and institutions. However, some PSD2 requirements, such as rules around consent, can differ significantly from requirements found in the GDPR and other European laws, leading to a lack of harmonization and to confusion for consumers, regulators, and financial institutions. Other leading open banking approaches include recent efforts in the UK, Australia, Brazil, Israel, India, Canada, and Mexico. The technical standards and requirements around open banking will likely have to be harmonized across regimes to support the international and cross-border nature of the global economy.
Open Banking Highlights Broader Questions about Data Portability, Competition, and Cross-Border Data Flows
While the Executive Order sends a trumpet blast to regulators, consumers, and financial stakeholders that open banking is a priority area for the current administration, many of the data protection themes at play are much broader than open banking and touch multiple industries. The inherent tensions found in open banking between privacy and competition—such as the need to keep data private and in trusted hands versus new players obtaining access to or control over data for various purposes—exist across the spectrum of Big Data. Further, open banking helps animate the current debate and recent interest around data portability requirements from agencies such as the FTC. Ultimately, interoperable rules and technical measures are necessary not only for beneficial and safe open banking, but also for other international and cross-border data exchanges.
Future of Privacy Forum Promotes Verdi, Zanfir-Fortuna & Vance
FPF has promoted three of its leaders to more senior roles at the growing international non-profit. John Verdi has been elevated to Senior Vice President of Policy, Dr. Gabriela Zanfir-Fortuna has been appointed Vice President of Global Privacy, and Amelia Vance is now Vice President of Youth and Education Privacy.
For more than five years, John Verdi has been integral to FPF’s success as a mentor to our staff and an advisor to privacy leaders in the public and private sectors. Our international, youth and education programs are respected resources for civil society, policymakers, and companies because of the leadership of Gabriela Zanfir-Fortuna and Amelia Vance. These three appointments reflect the growth of FPF as data protection issues impact organizations around the world.
Jules Polonetsky, FPF CEO
As Senior Vice President of Policy, John Verdi supervises FPF’s policy portfolio, which advances FPF’s agenda on a broad range of issues. Verdi came to FPF in 2016 after serving as the Director of Privacy Initiatives for the National Telecommunications and Information Administration, where he crafted policy recommendations for the U.S. Department of Commerce and the Obama Administration on technology and innovation. Verdi previously oversaw the Electronic Privacy Information Center’s litigation program as General Counsel.
In Gabriela Zanfir-Fortuna’s new role as Vice President for Global Privacy, she will lead FPF’s work on global privacy developments, advising on EU data protection law and policy and working with FPF’s offices in Europe and Asia Pacific, as well as partners around the world. Zanfir-Fortuna gained years of experience in EU and international privacy law while working for the European Data Protection Supervisor in Brussels, as well as the Article 29 Working Party.
As Vice President of Youth and Education Privacy, Amelia Vance advises policymakers, academics, companies, and schools on child and student privacy laws and best practices; oversees the Student Privacy Compass website; and convenes stakeholders to ensure the responsible use of child and student data. She is a regular speaker at privacy and education conferences in the U.S. and abroad, has testified before Congress, spoken on child and education privacy issues for the Federal Trade Commission and U.S. Department of Education, and is part of the group of experts reviewing the OECD revised recommendations on the protection of children online.
Over her five years at FPF, Vance has grown the youth and education privacy project to 12 full-time staff. She came to FPF after serving as the Director of Education Data and Technology at the National Association of State Boards of Education. Prior to that role, she was a legal fellow at the Institute of Museum and Library Services and the Family Equality Council, an intern at the White House, the State Department, and the Office of Congressman Sander Levin, and a Field Organizer for the 2008 Obama campaign.
Five Things Lawyers Need to Know About AI
By Aaina Agarwal, Patrick Hall, Sara Jordan, Brenda Leong
Note: This article is part of a larger series focused on managing the risks of artificial intelligence (AI) and analytics, tailored toward legal and privacy personnel. The series is a joint collaboration between bnh.ai, a boutique law firm specializing in AI and analytics, and the Future of Privacy Forum, a non-profit focusing on data governance for emerging technologies.
Behind all the hype, AI is an early-stage, high-risk technology that creates complex grounds for discrimination while also posing privacy, security, and other liability concerns. Given recent EU proposals and FTC guidance, AI is fast becoming a major topic of concern for lawyers. Because AI has the potential to transform industries and entire markets, those at the cutting edge of legal practice are naturally bullish about the opportunity to help their clients capture its economic value. Yet to act effectively as counsel, lawyers must also be vigilant of the very real challenges of AI. Lawyers are trained to respond to risks that threaten the market position or operating capital of their clients. However, when it comes to AI, it can be difficult for lawyers to provide the best guidance without some basic technical knowledge. This article shares some key insights from our shared experiences to help lawyers feel more at ease responding to AI questions when they arise.
I. AI Is Probabilistic, Complex, and Dynamic
There are many different types of AI, but over the past few decades, machine learning (ML) has become the dominant paradigm.[1] ML algorithms identify patterns in recorded data and apply those patterns to new data to try to make accurate decisions. This means that ML-based decisions are probabilistic in nature. Even if an ML system could be perfectly designed and implemented, it is statistically certain that at some point it will produce a wrong result. All ML systems rely on probabilistic statistics, and they can produce incorrect classifications, recommendations, or other outputs.
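A minimal sketch illustrates the point: even a reasonably well-fit classifier outputs probabilities rather than certainties, and some share of its decisions on held-out data will be wrong. The data here is synthetic and the model choice is arbitrary.

```python
# Minimal sketch of the probabilistic nature of ML: a classifier outputs
# probabilities, and some predictions on held-out data will inevitably be wrong.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data with a little label noise, standing in for a real-world task.
X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]        # probabilities, not certainties
preds = (proba >= 0.5).astype(int)               # a threshold turns them into decisions
error_rate = (preds != y_test).mean()
print(f"Held-out error rate: {error_rate:.1%}")  # nonzero: some decisions are wrong
```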
ML systems are also fantastically complex. Contemporary ML systems can learn billions of rules or more from data and apply those rules to a myriad of interacting data inputs to arrive at an output recommendation. Embed that billion-rule ML system into an already-complex enterprise software application and even the most skilled engineers can lose track of precisely how the system works. To make matters worse, ML systems decay over time, losing the fitness for purpose they had when first trained. Most ML systems are trained on a snapshot of a dynamic world as represented by a static training dataset. When events in the real world drift, change, or crash (as in the case of COVID-19) away from the patterns reflected in that training dataset, ML systems are likely to be wrong more frequently and to cause issues that require legal and technical attention. Even in the moment of the “snapshot,” there are other qualifiers for the reliability, effectiveness, and appropriateness of training data: how it is collected, processed, and labeled all bears on whether it can inform an AI system in a way that is fit for a given application or population.
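One common way practitioners quantify this kind of decay is to compare the distribution of a model’s inputs at training time against live data. The sketch below computes a population stability index (PSI) for a single hypothetical feature; the data and the 0.25 rule of thumb are illustrative, not a regulatory threshold.

```python
# A rough sketch of one common drift-monitoring technique: the Population
# Stability Index (PSI) comparing a feature at training time vs. in production.

import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare the distribution of a feature at training time vs. live data."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) for empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
training_income = rng.normal(50_000, 10_000, 10_000)   # snapshot at training time
live_income = rng.normal(44_000, 12_000, 10_000)       # the world has since shifted
psi = population_stability_index(training_income, live_income)
print(f"PSI = {psi:.2f}")  # common rule of thumb: PSI > 0.25 signals significant drift
```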
While all this may sound intimidating, an existing regulatory framework addresses many of these basic performance risks. Large financial institutions have been deploying complex decision-making models for decades, and the Federal Reserve’s model risk management guidance (SR 11-7) lays out specific process and technical controls that are a useful starting point for handling the probabilistic, complex, and dynamic characteristics of AI systems. Most commercial AI projects would benefit from some aspect of model risk management, whether or not they are monitored by federal regulators. Lawyers at firms and in-house alike who find themselves advising on AI-based systems would do well to understand options and best practices for model risk management, starting with understanding and generalizing the guidance offered by SR 11-7.
II. Make Transparency an Actionable Priority
Immense complexity and unavoidable statistical uncertainty in ML systems make transparency a difficult task. Alas, parties deploying—and thereby profiting from—AI can nonetheless be held liable for issues relating to a lack of transparency. Governance frameworks should include steps to promote transparency, whether preemptively or as required by industry- or jurisdiction-specific regulations. For example, the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA) mandate customer-level explanations known as “adverse action notices” for automated decisions in the consumer finance space. These laws set an example for the content and timing of notifications relating to AI decisions that could adversely affect customers, and they establish the terms of an appeals process against those decisions. Explanations paired with a logical consumer recourse process dramatically decrease the risks associated with AI-based products and help prepare organizations for future AI transparency requirements. New laws, like the California Privacy Rights Act (CPRA) and the proposed EU rules for high-risk AI systems, will likely require high levels of transparency, even for applications outside of financial services.
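To illustrate the mechanics of the adverse action notices mentioned above, one simple approach is to rank each applicant’s largest negative score contributions from a linear scoring model and map them to plain-English statements. The feature names, coefficients, and reason text below are hypothetical, and real notices must satisfy ECOA/FCRA content and timing requirements as reviewed by counsel.

```python
# Hedged sketch of generating adverse-action-style reason codes from a linear
# scoring model. Features, coefficients, and reason wording are hypothetical.

import numpy as np

FEATURES = ["credit_utilization", "recent_delinquencies", "account_age_years", "income"]
COEFS = np.array([-1.2, -0.9, 0.6, 0.4])     # hypothetical standardized coefficients

REASON_TEXT = {
    "credit_utilization": "Proportion of available credit in use is too high",
    "recent_delinquencies": "Number of recent delinquent payments",
    "account_age_years": "Length of credit history is too short",
    "income": "Income insufficient for amount of credit requested",
}

def reason_codes(applicant: np.ndarray, top_n: int = 2) -> list[str]:
    """Return plain-English reasons for the features that most hurt the score."""
    contributions = COEFS * applicant           # per-feature contribution to the score
    worst = np.argsort(contributions)[:top_n]   # most negative contributions first
    return [REASON_TEXT[FEATURES[i]] for i in worst]

# Example: an applicant with high utilization and a short credit history
# (values are standardized, so positive means above average).
applicant = np.array([1.8, 0.2, -1.5, 0.1])
print(reason_codes(applicant))
```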
Some AI system decisions may be sufficiently interpretable to nontechnical stakeholders today, like the written adverse action notices mentioned above, in which reasons for certain decisions are spelled out in plain English to consumers. But oftentimes the more realistic goal for an AI system is to be explainable to its operators and direct overseers.[2]
When a system is not fully understood by its operators, it is much harder to identify and sufficiently mitigate its risks. One of the best strategies for promoting transparency, particularly in light of the challenges around the “black-box” systems that are unfortunately common in the US today, is to rigorously pursue best practices for AI system documentation. This is good news for lawyers, who are adept in the skill and attention to detail required to institute and enforce such documentation practices. Standardized documentation of AI systems, with emphasis on development, measurement, and testing processes, is crucial to enable ongoing and effective governance. Attorneys can help by creating templates for such documentation and by ensuring that the documented technology and development processes are legally defensible.
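As a starting point, such documentation can be as simple as a structured record that travels with each system. The sketch below shows a hypothetical, lightweight “model card”-style template; the field names are illustrative, and any real template should be drafted jointly by counsel and the technical team.

```python
# A minimal, hypothetical sketch of standardized AI system documentation
# (a lightweight "model card"-style record). Field names are illustrative only.

from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDocumentation:
    system_name: str
    business_purpose: str
    intended_population: str
    training_data_sources: list[str]
    performance_metrics: dict[str, float]   # measured on held-out and live data
    bias_tests_performed: list[str]         # e.g., adverse impact ratio by group
    known_limitations: list[str]
    human_oversight: str                    # who reviews or can override outputs
    review_cadence: str = "quarterly"
    legal_review_completed: bool = False

doc = ModelDocumentation(
    system_name="credit_line_increase_v3",
    business_purpose="Recommend credit line increases for existing customers",
    intended_population="Existing cardholders with 12+ months of history",
    training_data_sources=["internal_transactions_2019_2023"],
    performance_metrics={"auc_holdout": 0.81, "auc_live_30d": 0.76},
    bias_tests_performed=["adverse impact ratio by sex and age band"],
    known_limitations=["performance degrades for thin-file customers"],
    human_oversight="Declines above $5,000 reviewed by a credit analyst",
)
print(json.dumps(asdict(doc), indent=2))
```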
III. Bias is a Major Problem—But Not the Only Problem
Algorithmic bias can generally be thought of as output of an AI system that exhibits unjustified differential treatment between two groups. AI systems learn from data, including the biases embedded in that data, and can perpetuate those biases at massive scale. The racism, sexism, ageism, and other biases that permeate our culture also permeate the data collected about us, and in turn the AI systems trained on that data.
On a conceptual level, it is important to note that although algorithmic bias often reflects unlawful discrimination, it does not constitute unlawful discrimination per se. Bias also includes the broader category of unfair or unexpected inequitable outcomes. While these may not amount to illegal discrimination of protected classes, they may still be problematic for organizations, leading to other types of liability or significant reputational damage. And unlawful algorithmic bias puts companies at risk of serious liability under cross-jurisdictional anti-discrimination laws.[3] This highlights the need for organizations to adopt methods that test for and mitigate bias on the basis of legal precedent.
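One example of such a method, grounded in employment-law precedent, is the adverse impact ratio associated with the EEOC’s four-fifths rule. The sketch below applies it to synthetic model decisions; the data and group labels are invented, and the result is a screening indicator, not a legal conclusion.

```python
# Simplified sketch of one widely used, precedent-based bias test: the adverse
# impact ratio (the EEOC "four-fifths rule" heuristic). Data is synthetic.

import numpy as np

def adverse_impact_ratio(outcomes: np.ndarray, groups: np.ndarray,
                         protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    rate_protected = outcomes[groups == protected].mean()
    rate_reference = outcomes[groups == reference].mean()
    return rate_protected / rate_reference

rng = np.random.default_rng(1)
groups = rng.choice(["group_a", "group_b"], size=5000)
# Synthetic model decisions (1 = approved) with a built-in disparity.
approve_prob = np.where(groups == "group_a", 0.60, 0.42)
outcomes = rng.binomial(1, approve_prob)

air = adverse_impact_ratio(outcomes, groups, protected="group_b", reference="group_a")
print(f"Adverse impact ratio: {air:.2f}")
# Under the four-fifths heuristic, a ratio below 0.80 warrants closer scrutiny;
# it is an indicator for further review, not a legal conclusion.
```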
Because today’s AI systems learn from data generated—in some way—by people and existing systems, there can be no unbiased AI system. If an organization is using AI systems to make decisions that could potentially be discriminatory under law, attorneys should be involved in the development process alongside data scientists. Those anti-discrimination laws, while imperfect, provide some of the clearest guidance available for AI bias problems. While data scientists might find the stipulations in those laws burdensome, the law offers some answers in a space where answers are very hard to find. Moreover, academic research and open-source software addressing algorithmic bias are often published without serious consideration of applicable laws. So, organizations should take care to ensure that their code and governance practices for identifying and mitigating bias have a firm basis in applicable law.
Organizations are also at risk of over-indexing on bias while overlooking other important types of risk. Issues of data privacy, information security, product liability, and third-party risks, as well as the performance and transparency problems discussed in previous sections, are all critical risks that firms should, and eventually must, address in bringing robust AI systems to market. Is the system secure? Is the system using data without consent? Many organizations are operating AI systems without clear answers to these questions. Look for bias problems first, but don’t get outflanked by privacy and security concerns or an unscrupulous third party.
IV. There Is More to AI System Performance Than Accuracy
Over decades of academic research and countless hackathons and Kaggle competitions, demonstrating accuracy on public benchmark datasets became the gold standard by which a new AI algorithm’s quality is measured. ML performance contests such as the KDD Cup, Kaggle, and MLPerf have played an outsized role in setting the parameters for what constitutes “data science.”[4] These contests have undoubtedly contributed to the breakneck pace of innovation in the field. But they’ve also led to a doubling-down on accuracy as the yardstick by which all applied data science and AI projects are measured.
In the real world, however, using accuracy to measure all AI is like using a yardstick to measure the ocean. It is woefully inadequate to capture the broad risks associated with making impactful decisions quickly and at web-scale. The industry’s current conception of accuracy tells us nothing about a system’s transparency, fairness, privacy, or security, in addition to presenting a limited representation of what the construction of “accuracy” itself claims to measure. In a seemingly shocking admission, forty research scientists added their names to a paper demonstrating that accuracy on test data benchmarks often does not translate to accuracy on live data.
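A toy example shows how little accuracy can mean on its own: when the event of interest is rare, a model that never flags anything still reports sterling accuracy while catching nothing. The numbers below are synthetic.

```python
# Small sketch of why accuracy alone is a poor yardstick: with a rare positive
# class, a "model" that never flags anything still scores ~99% accuracy.

import numpy as np

rng = np.random.default_rng(42)
y_true = rng.binomial(1, 0.01, size=100_000)   # 1% of cases are the event of interest
y_pred = np.zeros_like(y_true)                 # model that always predicts "negative"

accuracy = (y_pred == y_true).mean()
recall = y_pred[y_true == 1].mean() if (y_true == 1).any() else 0.0

print(f"Accuracy: {accuracy:.1%}")   # ~99% -- looks excellent
print(f"Recall:   {recall:.1%}")     # 0% -- the system never catches a single true case
```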
What does this mean for attorneys? Attorneys and data scientists need to work together to create more robust ways of benchmarking AI performance that focus on real-world performance and harm. While AI performance and legality will not always be the same, both professions can revise current thinking to imagine performance beyond high scores for accuracy on benchmark datasets.
V. The Hard Work Is Just Beginning
Unfortunately, at this stage of industry development, there are few professional standards for AI practitioners. Although AI has been the subject of academic research since at least the 1950s, and it has been used commercially for decades in financial services, telecommunications, and e-commerce, AI is still in its infancy throughout the broader economy. This too presents an opportunity for lawyers. Your organization probably needs AI documentation templates, policies that govern the development and use of AI, and ad hoc guidance to ensure different types of AI systems comply with existing and near-future regulations. If you’re not providing this counsel, technical practitioners are likely operating in the dark when it comes to their legal obligations.
Some researchers, practitioners, journalists, activists, and even attorneys have started the work of mitigating the risks and liabilities posed by today’s AI systems. Indeed, there are statistical tests to detect algorithmic discrimination and even hope that future technical wizardry will help mitigate it. Businesses are beginning to define and implement AI principles and to make serious attempts at diversity and inclusion for tech teams. And laws like ECOA, GDPR, CPRA, the proposed EU AI regulation, and others form the legal foundation for regulating AI. However, technical mitigation attempts still falter, many fledgling risk mitigations have proven ineffective, and the FTC and other regulatory agencies are still relying on general antitrust and unfair and deceptive practices (UDAP) standards to keep the worst AI offenders in line. As more organizations begin to entrust AI with high-stakes decisions, there is a reckoning on the horizon.
Author Information
Aaina Agarwal is Counsel at bnh.ai, where she works across the board on matters of business guidance and client representation. She began her career as a corporate lawyer for emerging companies at a boutique Silicon Valley law firm. She later trained in international law at NYU Law, to focus on global markets for data-driven technologies. She helped to build the AI policy team at the World Economic Forum and was a part of the founding team at the Algorithmic Justice League, which spearheads research on facial recognition technology.
Patrick Hall is the Principal Scientist and Co-Founder of bnh.ai, a DC-based law firm specializing in the intersection of AI and data analytics. Patrick also serves as visiting faculty at the George Washington University School of Business. Prior to co-founding bnh.ai, Patrick led responsible AI efforts at the high-profile machine learning software firm H2O.ai, where his work resulted in one of the world’s first commercial solutions for explainable and fair machine learning.
Sara Jordan is Senior Researcher of AI and Ethics at the Future of Privacy Forum. Her profile includes privacy implications of data sharing, data and AI review boards, privacy analysis of AI/ML technologies, and analysis of the ethics challenges of AI/ML. Sara is an active member of the IEEE Global Initiative on Ethics for Autonomous and Intelligent Systems. Prior to working at FPF, Sara was faculty in the Center for Public Administration and Policy at Virginia Tech and in the Department of Politics and Public Administration at the University of Hong Kong. She is a graduate of Texas A&M University and University of South Florida.
Brenda Leong is Senior Counsel and Director of AI and Ethics at the Future of Privacy Forum. She oversees the development of privacy analysis of AI and ML technologies and manages the FPF portfolio on biometrics and digital identity, particularly facial recognition and facial analysis. She works on privacy and responsible data management by partnering with stakeholders and advocates to reach practical solutions for consumer and commercial data uses. Prior to working at FPF, Brenda served in the U.S. Air Force. She is a 2014 graduate of George Mason University School of Law.
Disclaimer: bnh.ai leverages a unique blend of legal and technical expertise to protect and advance clients’ data, analytics, and AI investments. Not all firm personnel, including named partners, are authorized to practice law.
[1] Commentators have often used the image of Russian nesting (Matryoshka) dolls to illustrate these relationships: AI includes machine learning, and machine learning, in turn, includes deep learning. Machine learning and deep learning have risen to the forefront of commercial adoption of AI in applications areas such as fraud detection, e-commerce, and computer vision. See, e.g., The Definitive Glossary of Higher Mathematical Jargon, MATH VAULT (last accessed Mar. 4, 2021), https://mathvault.ca/math-glossary/#algo; Eda Kavlakoglu, AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the Difference?, IBM BLOG (May 27, 2020), https://www.ibm.com/cloud/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks.
[2] In recent work by the National Institute for Standards and Technology (NIST), interpretation is defined as a high-level, meaningful mental representation that contextualizes a stimulus and leverages human background knowledge. An interpretable AI system should provide users with a description of what a data point or model output means. An explanation is a low-level, detailed mental representation that seeks to describe some complex process. An AI system explanation is a description of how some system mechanism or output came to be. See David A. Broniatowski, Psychological Foundations of Explainability and Interpretability in Artificial Intelligence (2021), https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=931426.
[3] For example, The Equal Credit Opportunity Act (ECOA), The Fair Credit Reporting Act (FCRA), The Fair Housing Act (FHA), and regulatory guidance, such as the Interagency Guidance on Model Risk Management (Federal Reserve Board, SR Letter 11–7). The EU Consumer Credit Directive, Guidance on Annual Percentage Rates (APR), and General Data Protection Regulation (GDPR) serve to provide similar protections for European consumers.
[4] “Data science” tends to refer to the practice of using data to train ML algorithms, and the phrase has become common parlance for companies implementing AI. The term dates back to 1974 (or perhaps further), coined then by the prominent Danish computer scientist Peter Naur. Data science, despite the moniker, is yet to be fully established as a distinct academic discipline.
Event Report: From “Consent-Centric” Frameworks to Responsible Data Practices and Privacy Accountability in Asia Pacific
On September 16, the Asia-Pacific office of the Future of Privacy Forum (FPF) held its first event following its launch in August 2021. The event was hosted by the Personal Data Protection Commission (PDPC) of Singapore during the very popular Personal Data Protection Week (PDP Week 2021).
The theme of the event was Exploring trends: From “consent-centric” frameworks to responsible data practices and privacy accountability in Asia Pacific, which is also the theme of a larger project carried out jointly by FPF and the Asian Business Law Institute (ABLI) across 14 Asian jurisdictions. The event was co-organized by ABLI and FPF in the context of a cooperation agreement signed by the two organisations in August 2021.
This post summarizes the discussions in the two stellar panels featuring regulators, thought leaders, and practitioners from across the region, and highlights key takeaways:
Taken together, the consent requirements that apply to the collection and processing of personal data, “notice & choice”, and the exceptions and alternatives to those requirements form the area where regulatory coherence is most needed in Asia-Pacific (APAC). Over-reliance on consent has led to a “tick-the-box” approach to data protection, consent fatigue, and unnecessary compliance costs driven by contradictory requirements across the region.
Modern data protection laws should shift the onus of data protection from users to organizations, by promoting an accountability-based approach to data protection over a “consent-centric” one. Different avenues may be used to rebalance consent and privacy accountability in APAC, including through concepts such as legitimate interests, compatible uses and equivalent notions.
Making consent meaningful again in APAC can happen in a variety of ways, which include winding back the range of circumstances in which consent is sought; requiring consent only where it can be given thoughtfully, sparingly and with understanding; supporting enhanced transparency and consent through UX and UI design, with due attention brought to the different needs and literacy levels of users.
Harmonization is illusory in the face of Asia’s extreme diversity, but a bottom-up approach to convergence can work in the context of regional cooperation.
1. Repositioning consent requirements in APAC’s fragmented data protection landscape
Dr. Clarisse Girot, Director of FPF Asia Pacific and ABLI Senior Fellow, opened the discussion by explaining that a comparative look at “consent” requirements across the region was chosen as a key topic following suggestions from a vast network of stakeholders. Feedback showed that the consent requirements that apply to the collection and processing of personal information, the “Notice & Choice” principle, and the exceptions and alternatives to those requirements together form the area where regulatory coherence is most needed in the Asia Pacific (APAC) region.
In practice, the cumulative application of consent requirements for data processing in the region has led to the development of a “tick-the-box approach” to data protection in many jurisdictions. However, in APAC as elsewhere, overreliance on consent as a lawful ground by organisations has led to a general consent fatigue and unnecessary compliance costs, due to contradictory requirements.
A consensus is therefore forming across Asian jurisdictions that modern data protection laws should shift the onus of data protection from users to organizations, by promoting an accountability-based approach to data protection over a “consent-centric” one. This creates a need to put the role of consent in perspective and return it to the place initially assigned to it by the very first data protection frameworks — namely, as one among many elements in a regulatory ecosystem that seeks to balance the role and interests of individuals, the responsibility of organizations, and broader social and societal interests with regard to the processing of personal data.
The main goal of the workshop was therefore to identify similar discussions that are taking place in multiple jurisdictions in APAC and to explore the possibilities of convergence among them. The discussion will also feed a joint comparative study with recommendations for convergence on consent and related data protection requirements, which will be published jointly by FPF and ABLI before the end of the year.
Both panels were composed of data protection professionals from different APAC jurisdictions and disciplines. Each speaker contributed with an original and expert point of view that could help identify commonalities, pathways for interoperability between Asian data protection frameworks, and concrete solutions to provide meaningful data protection to individuals — with or without consent.
Such reflections and recommendations are particularly timely, as key jurisdictions in Asia, including India, Indonesia, Thailand, Vietnam, Hong Kong SAR, Malaysia, and Australia, are adopting new data protection frameworks or amending their laws, and new laws or major amendments have recently come into force in jurisdictions such as Thailand, Korea, New Zealand, China, and Singapore.
2. Rebalancing consent and privacy accountability
The title of the first panel was “Switching from a consent-centric approach to privacy accountability: a comparative view of APAC data protection laws”.
The panel was moderated by Yeong Zee Kin, Assistant Chief Executive, Infocomm Media Development Authority (IMDA), and Deputy Commissioner, PDPC, Singapore, with input from Peter Leonard, Principal and Director at Data Synergies, Sydney; Takeshige Sugimoto, Managing Director at S&K Brussels, Tokyo; Shinto Nugroho, Chief Public Policy and Government Relations at Gojek, Jakarta; and Marcus Bartley-Johns, Asia Regional Director, Government Affairs and Public Policy at Microsoft, Singapore.
The goal of this first panel was to identify commonalities and pathways for interoperability between Asian data protection frameworks with regard to balancing the protection of individuals, accountability, and broader social and societal interests. This includes the role of consent, lawful grounds to process personal data, and/or other privacy principles in jurisdictions which do not contain provisions on “lawfulness” of processing.
The most important points highlighted during the discussion were the following:
2.1 How to achieve convergence across APAC’s fragmented and diverse landscape?
As an introductory note, Yeong Zee Kin stressed that APAC jurisdictions take different approaches towards privacy and data protection, and that their laws are at different stages of development (e.g., Japan and South Korea have had privacy laws for a long time, while Singapore, the Philippines, and Malaysia are more recent players). One may add that data protection and privacy laws in the region follow different structures and are not all modelled on the EU GDPR, so some key provisions (e.g., on the “lawfulness” of data processing) have no equivalent in other jurisdictions.
A challenge which is endemic in APAC is therefore to identify a common ground in order to achieve convergence, while respecting the different inspirations and the particular culture that are enshrined in each jurisdiction’s privacy laws.
This raised a key question for participants: whether APAC stakeholders should aim for harmonisation or for more targeted actions of convergence, for instance through the mutual recognition of specific legal standards.
2.2 Over-reliance on consent and need for alternatives
Speakers highlighted that APAC-based organisations tend to overly rely on consent, even in cases where another solution or legal basis would be available and more appropriate. The potential consequence of such a practice is the erosion of the value of consent.
A view expressed by Peter Leonard and shared across the panel was that consent, “informational self-determination”, or “citizen self-management” of privacy settings remains important. However, individuals should only be expected to self-manage what is realistically manageable by them. There is a need to reduce both the frequency of consent requests and the level of noise in privacy policies and collection notices, as well as to rethink the role of those policies and notices.
Among “noise reduction measures”, he specifically cited appropriately targeted exceptions, whether through legitimate interests, industry codes or standards, class exemptions by regulators, or new generic concepts such as “compatible data practices”, in such a way that the control of individuals over their personal data is not adversely affected. As a baseline, moving away from consent requires recognizing the importance of concepts like “reasonableness” or “fairness” to support the alternative requirements of data protection laws.
Unambiguous express consent should remain necessary for categories of processing that create a higher risk of privacy harm to individuals, in particular for manifestly sensitive data (including data about children) and for processing that directly contradicts individuals’ rights and interests or cannot reasonably be expected by them. This may also tie in with the concept of “no-go zones” as it has been developing in Canada, which has gained some popularity in Australia.
2.3 Varying approaches and interpretations in different jurisdictions: Japan, Indonesia, Vietnam
Another point raised by the moderator and panellists was that material differences in the protections awarded by legal systems in APAC countries may hinder the path towards harmonisation. There is therefore a need to better understand how each law works before proposing solutions for convergence, so that they can be meaningful for all.
Takeshige Sugimoto commented on the “consent by default” situation which currently prevails in Japan. He noted that the Japanese data protection law (APPI) does not have a “legitimate interest” legal basis, but that—contrary to a common belief—it does not take a consent-centric approach either. Rather it permits processing of personal data based on the “business necessity” ground, as long as the data subject may reasonably expect the intended further usage of his or her data. The boundaries of permissible processing under APPI are therefore similar to GDPR, even without “legitimate interests” as a legal basis. In its adequacy decision on Japan, the European Commission actually states that the Japanese system also ensures that personal data is processed lawfully and fairly through the purpose limitation principle.
Sugimoto also mentioned the Japanese Personal Information Protection Commission (PPC Japan) guidelines, which list limited cases where consent must be sought, while pointing to other areas which are open to other legal bases and authorisation from the PPC. In other words, in his view there would be no significant difference, in practice, between what GDPR considers legitimate interests-based processing, and what APPI considers lawful processing.
Shinto Nugroho presented the situation in ASEAN from the perspective of Gojek, Indonesia’s first decacorn and SuperApp, with operations in Indonesia, Vietnam, Singapore, Thailand, and the Philippines. Nugroho’s particular focus was on the challenges of operationalizing consent in times of crisis, such as the current Covid-19 pandemic. She noted that in its current state Indonesia’s data protection legislation is quite consent-centric, but that the draft Data Protection Bill, soon to be adopted by the Parliament of Indonesia, treats consent as only one of seven available lawful grounds for processing personal data (others including contract, performance of a legal obligation, and legitimate interests).
Nugroho welcomed this development. She explained how consent as a legal ground is not always practical for controllers or protective for individuals, and is in fact sometimes even harmful for citizens. For instance, in Indonesia, out of 170 million inhabitants roughly 160 million are eligible for vaccination against Covid. Gojek has secured massive numbers of vaccination slots from the government, notably for its drivers, who are frontline workers. However, the government requires that everyone first be registered in the public vaccination system, for which consent is required. Not everyone has access to the Internet or the literacy required to register; moreover, the vaccination register itself is a work in progress. Securing 100% opt-in consent from millions of drivers is not only going to slow down the process; drivers are also going to miss notifications or fail to complete their registration. In such cases, for Gojek, the most adequate legal basis for getting drivers registered would be its “legitimate interests” as an employer, combined with clear purposes and adequate transparency, rather than mere consent. The fact that drivers are exposed to a high risk of contamination at a time when the epidemic is hitting the country should override the need to obtain consent.
Lastly, Nugroho mentioned the ongoing discussions on Vietnam’s future Data Protection Decree, expected to be adopted imminently. The Decree does not provide for a legitimate interests basis, but it similarly allows controllers to collect and process data on grounds other than consent (such as security, when permitted under the law, and research). Discussions on convergence must therefore factor in the fact that APAC data protection laws can vary even between neighbouring countries that have drawn inspiration from similar sources (primarily the EU GDPR) in drafting their future comprehensive data protection frameworks.
2.4 Transparency & choice as trust enablers
Marcus Bartley-Johns welcomed the nuance that the discussions brought to the conversation: “making consent meaningful again” is a journey, and binary approaches (“for or against consent”) should be avoided. He also concurred with Takeshige Sugimoto that laws and regulations can go in one direction while business practices and embedded behaviors go in another, and these variations are a key part of the discussion around consent.
Bartley-Johns shared a few data points on what consent means in the region. In 2019, Microsoft surveyed 6,300 consumers across Asia on consumer perceptions of trust; 53% of those surveyed said they had had a negative trust experience related to privacy when using a digital service in the region. Younger people reported a higher share of negative experiences, and more than half of those said they would switch services if their trust was breached. Bartley-Johns added that the fact that consumers have reasons to be wary should be acknowledged, one of those reasons being how difficult it is for individuals to find out and understand how their data is being collected and used.
Another data point relates to the privacy dashboard that enables Microsoft users globally to see and control their activity data, including location, search, and browsing data, across multiple services. 51 million unique visitors have used the dashboard since its launch in May 2018 (19 million in 2020), and Japan, China, Australia, India, and Korea are among the top 20 markets for its use. In other words, the speaker said, Microsoft’s experience shows that consumers wish to know what personal data is collected about them and to exercise their options and rights when given the opportunity to do so.
Following up on this point, Peter Leonard added that transparency plays a double role: it allows individuals to know how their data are being used, and it provides safeguards against deceptive and manipulative statements by organisations where appropriate “do only what you say” laws are in place at the local level.
2.5 “Legitimate interests” in context
On the whole, all the speakers expressed their support for the development of the concept of legitimate interests or equivalent concepts in APAC laws. The adoption across more privacy laws of alternative grounds for processing personal data, notably legitimate interests, is one of the potential areas for strengthening privacy regulatory coherence in the region. Microsoft for instance has advocated for this necessity in a recent policy paper calling for strengthening privacy regulatory coherence in Asia.
Speakers noted that one problem in APAC with increased reliance on legitimate interests as an alternative to consent is that lists of legitimate interests are varied and jurisdiction-specific. This means that entities operating across borders and seeking a common denominator in their privacy policies and requests for consent will continue to be incentivised to over-rely on consent, unless they are given some certainty about how lawmakers and regulators are likely to apply this notion. Convergence can be strengthened by the adoption of regulatory guidelines on implementing this approach and by information sharing on their implementation.
Peter Leonard added that, to make the legitimate interests lawful ground work in APAC, the region may need a mutual recognition scheme covering the differing definitions of and approaches to legitimate interests. In his view, this will not lead to absolute convergence, but it will allow a compromise that takes account of the local legal systems and cultures of a diverse Asia. Failing this, data controllers will keep defaulting to consent as the common denominator.
In the view of Takeshige Sugimoto, a compilation of use cases clarifying whether legitimate interests or consent would be the most appropriate legal basis in each case would help achieve a more holistic regional approach. This could lead to international consensus on specific use cases, which would be more efficient than awaiting joint regulatory guidance that might take years to be issued.
Marcus Bartley-Johns suggested that it would be valuable to test whether the consensus that emerged from this panel could also take hold in the regional and global regulatory community. This is important because regulations and guidance issued in Asia in recent months tend to make transparency and consent requirements even more prescriptive. In this respect, there would be real value in practical guidance from regulators on these issues, as the PDPC has provided, with indicative examples, use cases, and scenarios that give a basis for a more holistic approach to balancing consent and other approaches in the region.
Seconding the comments by Sugimoto and Bartley-Johns, Yeong Zee Kin indicated that one of the sources of inspiration for drafting the PDPC’s guidelines on legitimate interests under the recently amended PDPA was FPF’s report on legitimate interests in the EU, which compiles regulatory guidance, decisions, and court cases clarifying the scope of the legitimate interests lawful ground. He suggested that the right way forward would probably be to identify real-world examples and use cases where a regional or global consensus can be reached on situations that do not require consent; the next step would be for regulators to contextualize the end result within their respective legal systems (necessity, reasonableness, legitimate interests, contractual necessity, vital interests, etc.).
The moderator suggested that FPF and other stakeholders contribute to building this library of “legitimate interests”, and that regulators could do their part by going out to their local industries to look for such use cases. Echoing a remark by Peter Leonard, however, he acknowledged that given the broad spectrum of cultures and histories in Asia, complete harmonization is not realistic. In contrast, a practical, bottom-up approach to convergence might get us somewhere, and we should seek to build on points of consensus as and when we find them, for instance bilaterally between like-minded partners and, perhaps more slowly, at the regional level.
3. Making consent meaningful (again)
The title of the second panel was “Shaping choices in the digital world: how consent can become meaningful again”. The panel was moderated by Rajesh Sreenivasan, Head, Technology Media and Telecoms Law Practice, Rajah & Tann Singapore LLP. It further included interventions by Anna Johnston, Principal, Salinger Privacy (Sydney); Malavika Raghavan, Visiting Faculty, Daksha Fellowship and FPF Senior Fellow for India; Rob van Eijk, FPF Europe Managing Director; and Edward Booty, Founder and CEO of reach52.
Rajesh Sreenivasan started by saying that the problem with consent lies not in the concept itself but in the way this legal ground has been used for processing personal data. Especially in APAC, where multiple jurisdictions take very different approaches, he noted that obtaining meaningful consent first requires answering two questions: 1) Meaningful consent for whom: the data subject or the organisation? and 2) Meaningful how? He also openly asked participants whether, in their view, it was more pressing to make consent meaningful or to build alternative models for fair data processing, given that consent may have become redundant at the speed at which data is being used today.
3.1 Are current online consent-seeking practices fair?
Anna Johnston kicked off by supporting a burden shift, away from individuals and onto organisations, when it comes to consent standards. According to her, consent has almost lost its true meaning because it has been so over-used as a promise — in her own words, it has become like “your cheque is in the mail”!
The situation in Australia as she sees it is that consent is over-relied on, but also under-enforced. Guidance from the Australian Privacy Commissioner (the OAIC), backed by case law, establishes that consent under Australian law is similar to the GDPR: it cannot be bundled with other things, it cannot be included in mandatory Terms and Conditions or in a Privacy Policy, and it cannot be obtained on an “opt-out” basis; consent as a lawful basis on which to collect, use, or disclose personal information has to be the customer’s clear “opt-in” choice, made freely and separately from all other choices. However, the law is under-enforced, so it is still very common to see business practices that follow a model of “bury the customer in fine print and make them agree to something we know they won’t even read”, and then claim that the customer has consented.
Surveys conducted by the OAIC suggest that only 20% of Australians feel confident that they understand privacy policies when they actually read them. Recently the Australian consumer and competition regulator, the ACCC, called out this kind of power imbalance and these kinds of behaviours from the Big Tech platforms, and recommended that the Privacy Act be amended to make the standard required for consent much clearer in the law.
3.2 The boundaries of consent’s role
Overall, speakers agreed that there is a need to “make consent meaningful again”, primarily by winding back the range of circumstances in which organisations seek consent. Consent should be sought sparingly, and only where it can be given thoughtfully and with understanding. Consent is only real consent where an individual has a real choice [note: an increasing number of data protection laws in Asia recognize the concept of “free and unbundled consent”]. A discussion is needed about when requiring consent is sensible, and about how to ensure that individuals’ ability to control their privacy settings is not compromised by any changes in consent requirements.
Winding back such requirements in order to improve data privacy may sound both radical and counter-intuitive. However, across both sessions a consensus formed that processing without consent should only be recommended where the processing is aligned with the ordinary expectations or direct interests of data subjects, and never at the expense of transparency.
Anna Johnston thus argued for a clear distinction between business activities that require consent and those that do not. Activities that fall outside customers’ expectations should require consent (e.g., asking someone to join a research project), whereas unobjectionable, fair and proportionate activities (such as including an individual in a customer database) and activities with public interest backing should not. She concluded by adding that there should also be a list of activities that are prohibited even if consent is given, including profiling children for marketing purposes.
In his presentation, FPF’s Managing Director for Europe, Dr. Rob van Eijk, concurred and added that much of the debate on the consequences of the datafication of society has focused both on limiting the collection of data and on constraining its further use. Consent is one way to regulate these two “gateways”, and there are multiple ways to ensure that everyone is on board. In practice, however, much of the burden falls on users to read and comprehend what is being put forward. This aspect was the key focus of this year’s Dublin Privacy Symposium organized by FPF, entitled Designing for Trust: Enhancing Transparency & Preventing User Manipulation.
An important point made during the symposium is that organisations should be proactive in increasing transparency from a design perspective, so as to present users with a real choice and encourage them to make deliberate decisions. Understandability (how people actually read through the information), for instance, can be tested with technology in the online space. Another important point is that organisations should ask themselves whether they should be collecting all the envisaged data in the first place, in line with the data minimisation principle.
They must also take active steps to prevent user manipulation, not only when designing consent solutions (for instance, cookie banners), but also when they process data through machine learning algorithms. Finally, the question of vulnerable groups should be factored into the design of UX/UI (“have we left any groups behind?”). A lot can be done to make things more understandable, which in turn raises the question of the extent to which the expression of choice can be embedded in the technology itself.
3.3 Dealing with users with different needs and literacy levels in APAC
The Asia Pacific region is a region of contrasts, especially in terms of literacy levels, including financial and health literacy, due among other reasons to differences in educational levels and the wide linguistic variety that exists in some countries.
FPF’s Malavika Raghavan shared findings from her research and extensive field work in India, exploring how the mental models of internet users in India affect these discussions on consent, with a particular focus on the financial sector (e.g., loan applications). She underlined the importance of understanding the context of non-Western users, particularly new generations of users in Asia, before attempting to design laws and practices for obtaining meaningful online consent.
For instance, Raghavan pointed to surveys showing that many mainstream Indian users – i.e., modest-earning individuals from primarily rural areas – do not understand the differences between their mobile phones, the internet, online services and allied services like payment platforms, because they access all of these exclusively through their phones. Understanding this reality (users who have never used a computer, only mobile phones with preloaded apps, free allowances, etc.) is key to thinking about how to design consent, and indeed to policymaking around consent.
However, literacy is not necessarily a barrier, and it is not tied to digital skills: highly proficient digital users might not be literate, and vice versa. Moreover, many Indian families share their mobile devices, which means that consent in those scenarios should be understood as given by a group of individuals rather than by separate individuals: this mental model is very far from that of a designer or policymaker, and asking for one-to-one consent in such circumstances might not make sense. Yet however disadvantaged, individuals still have strong ideas about how their data may be shared.
The limitations of consent have been analysed by Raghavan in particular in her work on the Data Empowerment and Protection Architecture (DEPA) and the Consent Layer developed by Indiastack, which seeks to enable secure and effective sharing of personal data with third-party institutions in India by using the concept of “consent managers”. Raghavan highlighted in her work how cognitive limitations affect individuals’ decision-making about their personal data, and how the threat of denial of service can make “taking consent” a false choice. To be effective, therefore, such systems must be supported by strong accountability mechanisms and access controls that operate independently of consent. Relying solely on consent is not a good idea: a wealth of data protection and consumer protection thinking has shown that consent is necessary but not sufficient for data protection.
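To illustrate what a consent-manager architecture passes around in practice, the hypothetical Python sketch below models a highly simplified “consent artefact”: a time-bound, purpose-limited, revocable record of a user’s consent that a consent manager could exchange between a data provider and a data consumer. The field names and structure are illustrative assumptions only and do not reproduce the actual DEPA or Indiastack specifications.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

# Hypothetical, simplified "consent artefact": the structured record a
# consent manager could pass between a user, a data provider (e.g., a bank),
# and a data consumer (e.g., a lender). Field names are illustrative only.
@dataclass
class ConsentArtifact:
    user_id: str
    data_provider: str          # institution holding the data
    data_consumer: str          # institution requesting the data
    purpose: str                # stated purpose of the sharing
    data_types: List[str]       # categories of data covered by the consent
    valid_from: datetime
    valid_until: datetime
    revoked: bool = False

    def is_active(self, at: datetime) -> bool:
        """Consent is usable only within its validity window and if not revoked."""
        return (not self.revoked) and (self.valid_from <= at <= self.valid_until)

    def revoke(self) -> None:
        """The user (via the consent manager) can withdraw consent at any time."""
        self.revoked = True

# Example: a time-bound, purpose-limited consent for a loan application.
artifact = ConsentArtifact(
    user_id="user-123",
    data_provider="example-bank",
    data_consumer="example-lender",
    purpose="loan-eligibility-check",
    data_types=["account-balance", "transaction-summary"],
    valid_from=datetime(2021, 9, 1),
    valid_until=datetime(2021, 9, 1) + timedelta(days=30),
)
print(artifact.is_active(datetime(2021, 9, 15)))  # True while within the window
artifact.revoke()
print(artifact.is_active(datetime(2021, 9, 15)))  # False once revoked
```

Even with such a structure in place, the artefact only records a choice; as Raghavan argues, accountability mechanisms and access controls that operate independently of consent remain necessary.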
Moreover, the panellist concluded, coders and digital platform designers should consider users’ perceptions, literacy and context when setting up online services; the law alone cannot fix what has been broken by technology. This, according to Raghavan, is particularly important in a jurisdiction where the highest courts have recognized privacy as a fundamental right (such as India) and where users have strong ideas and reasonable expectations about how digital data flows occur. In that exercise, unbundling ancillary processing that requires consent from online services’ terms and conditions should be front and center.
Edward Booty then shared his experience as CEO of reach52, a social enterprise and growth-focused start-up that provides accessible and affordable healthcare for the 52% of the world’s population without access to health services, with five key markets: Cambodia, the Philippines, India, Indonesia, and Kenya.
reach52 uses technology and community outreach to widen access to health services while simultaneously lowering their costs. Booty explained that his company is still small but has accumulated a lot of sensitive data in the multiple countries in which it operates. He shared his experience of collecting health data and profiling residents to provide better care in remote rural communities in the Philippines and Cambodia, and of uncovering data-driven insights to inform more targeted, effective access to healthcare. Although it is sometimes disheartening that some users do not care, not having legitimate consent from users in a data-driven business model constituted a risk to his start-up. Furthermore, reach52 still believes that it must help the people who use its services understand their rights around data collection and use, regardless of their education and literacy levels. Booty explained how consent was sought from individuals who provided their data for this purpose, using video, visuals, and progressive disclosure, and paying attention to the way terms are explained and consent is obtained, so as not to fall short for people with low literacy and education levels. For this, support was obtained from the Facebook accelerator and IMDA.
A specific challenge explained by Booty is that local and national government authorities were then coming to reach52 to obtain access to the datasets for a variety of purposes, notably to manage different humanitarian crises. The speaker shared that, as pressure from those authorities mounted, the organisation started working on ways to get more meaningful and granular consent from individuals for each of the needs that their data could serve. This involved engaging designers to deliver simple flyers with information to individuals about what could happen to their data after its collection, as well as about their data-related rights. The process included testing with different age groups to make the message intelligible for a wide audience.
3.4 How UX and UI can support enhanced transparency and consent
Several times during the session, participants raised the idea that designers, and improvements to the user experience and user interface (UX/UI), have an essential role to play in improving the regulation of choice architectures.
In recent years, more academics and data protection regulators have underlined the fundamental role that UX/UI design can play in user empowerment, and argued that design and interfaces must now form part of the compliance analysis. Universally accepted icons could be one way to improve intelligibility, said Anna Johnston. In her presentation, she argued that web designers should try to think with the mind of a user, by drawing on useful evidence and guidance on how to better design privacy notices, such as the UK Government’s work on better privacy notice design.
Various ideas for improving privacy notices are modelled on successful designs used in safety messaging (like traffic light indicators) and product labelling (such as star ratings and nutrition labels). But this form of notice still does not work at scale. Anna Johnston considered the most innovative idea she has seen in this space to be a proposal from Data61, an arm of the Australian Government, for machine-readable icons modelled on the Creative Commons icons used in copyright law: universally agreed, legally binding, clear and machine-readable.
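To make the notion of “machine-readable” privacy terms more concrete, the short Python sketch below shows one hypothetical way standardised terms could be encoded so that software can parse them while a simple icon set is rendered for people. The vocabulary, field names and icons are invented for illustration and do not reflect Data61’s actual proposal or any existing standard.

```python
import json

# Hypothetical machine-readable privacy terms, loosely analogous to the way
# Creative Commons expresses licence terms in a standard, parseable form.
# The vocabulary below is invented for illustration only.
PRIVACY_TERMS = {
    "controller": "example.org",
    "purposes": ["service-delivery", "analytics"],
    "third_party_sharing": False,
    "sells_personal_data": False,
    "retention_days": 90,
}

ICONS = {  # simple text stand-ins for standardised icons
    "third_party_sharing": {True: "[SHARED]", False: "[NOT SHARED]"},
    "sells_personal_data": {True: "[SOLD]", False: "[NOT SOLD]"},
}

def machine_readable() -> str:
    """What a browser or comparison tool could parse automatically."""
    return json.dumps(PRIVACY_TERMS, indent=2)

def human_summary() -> str:
    """What could be rendered to the user as icons or short labels."""
    return " ".join(
        ICONS[key][value] for key, value in PRIVACY_TERMS.items() if key in ICONS
    )

print(machine_readable())
print(human_summary())  # e.g. "[NOT SHARED] [NOT SOLD]"
```

The design point is that a single underlying record can feed both a machine-readable layer (for browsers, agents, or comparison tools) and a human-readable layer (icons or labels), much as Creative Commons licences pair legal text with standard icons and metadata.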
This latter suggestion was echoed by the findings of FPF’s Dublin Privacy Symposium on manipulative design practices, which Dr. Rob van Eijk outlined during the session. According to him, the Symposium’s speakers explained that providers should encourage users to make deliberate decisions online by avoiding so-called “dark patterns”, should consider the needs of vulnerable groups (such as visually impaired or colour-blind users), and should think about the best way of informing users where data collection devices have no visual or audio interface (e.g., IoT devices). Van Eijk added that cookie walls, as they are developing in Europe, may be a radical solution, as they prevent users from accessing content unless they agree to pay a fee or accept online tracking.
Conclusion
Commissioner Raymund Liboro, National Privacy Commissioner of the Philippines, delivered the concluding remarks of the workshop.
To support the work of FPF and ABLI and the discussions of the day, Commissioner Liboro cited a topical case in the Philippines. In late August, his office ordered the take-down of money lending apps from the Google Play Store to sanction the practices of some online lending platforms. These platforms harvested excessive information from their users, without a legitimate purpose, through unreasonable and unnecessary app permissions, including saving and storing their clients’ contact lists and photo galleries, ostensibly to evaluate creditworthiness. Yet an applicant’s creditworthiness can be determined through other lawful and reasonable means. Moreover, these apps have been the subject of more than 2,000 complaints of unauthorized use of personal data, which resulted in the harassment and shaming of borrowers in front of the people in their mobile devices’ contact lists in order to collect debts.
Such behaviours and practices cannot be considered acceptable simply because users have supposedly given their “legitimate consent” to them, which was the companies’ first line of defence. This, Commissioner Liboro said, combined with the privacy paradox, urges the data protection community to reconsider the current regulatory paradigm operating in Asia and globally. As policymakers now regulate at hyperscale, with comprehensive laws emerging in China, India, Indonesia, Thailand and many other ASEAN countries and affecting millions of data subjects, the current dependence on consent and paper compliance should be replaced with accountability and an added onus on organisations to ensure and demonstrate compliance. Privacy accountability is a compelling force, and accountable organisations foster trust and thrive, said the Commissioner.
The workshop set the scene for, and informed, the discussion around consent and accountability in APAC jurisdictions. All participants agreed on the need to reconsider the use of the consent legal ground in the region. The datafication of society, as well as the global dimensions of privacy and data protection, will likely push policymakers to aim for convergence, while respecting the legal culture and approach of each jurisdiction.
Commissioner Liboro concluded the event by expressing his appreciation to everyone who participated in the discussions, and reminded participants that this conversation aims to lay the foundations of a collective response that will benefit the privacy ecosystem in the Asia-Pacific region.
The next steps of the FPF ABLI project will be announced soon.
Brain-Computer Interfaces: Privacy and Ethical Considerations for the Connected Mind
A forthcoming FPF and IBM report focusing on BCI privacy and ethics will be published in November 2021.
FPF-curated educational resources, policy & regulatory documents, academic papers, thought pieces, and technical analyses regarding brain-computer interfaces are available here.
Introduction
Brain-computer interfaces (BCIs) are a prime example of an emerging technology that is opening new avenues of human-machine interaction. Communication interfaces have developed from the keyboard and mouse to touchscreens, voice commands, and gesture interactions. As computers become more integrated into the human experience, new ways of commanding computer systems and experiencing digital realities have grown in popularity, with novel uses ranging from gaming to education.
Defining BCIs and Neurodata
BCIs are computer-based systems that directly record, process, analyze, or modulate human brain activity in the form of neurodata that is then translated into an output command from human to machine. Neurodata is data generated by the nervous system, composed of the electrical activities between neurons or proxies of this activity. When neurodata is linked, or reasonably linkable, to an individual, it is personal neurodata.
BCI devices can be either invasive or non-invasive. Invasive BCIs are installed directly into—or on top of—the wearer’s brain through a surgical procedure. Today, invasive BCIs are mainly used in the health context. Non-invasive BCIs rely on external electrodes and other sensors or equipment connected to the external surface of the head or body, for collecting and modulating neural signals. Consumer-facing BCIs primarily use various non-invasive methods, including headbands.
Key Applications and Top-of-Mind Privacy and Ethical Challenges
Some BCI implementations raise few, if any, privacy issues. For example, individuals using BCIs to control computer cursors might not reveal any more personal information than typical mouse users, provided BCI systems promptly discard cursor data. However, some uses of BCI technologies raise important questions about how laws, policies, and technical controls can safeguard inferences about individuals’ brain functions, intents, or emotional states. These questions are increasingly salient in light of the expanded use of BCIs in:
Gaming – where BCIs augment existing gaming platforms and offer players new ways to play using devices that record and interpret their neural signals.
Employment – where BCIs monitor workers’ engagement to improve safety during high-risk tasks, alert workers or supervisors of dangerous situations, modulate workers’ brain activity to improve performance, and provide tools to more efficiently complete tasks.
Education – where BCIs can track student attention, identify students’ unique needs, and alert teachers and parents of student learning progress.
Neuromarketing – where marketers incorporate the use of BCIs to intuit consumers’ moods, and to gauge product and service interest.
Military – where governments are researching the potential of BCIs to help rehabilitate soldiers’ injuries and enhance communication.
It is important for stakeholders in this space to delineate between the current and near future uses and the far-distant notions depicted by science fiction creators. The realistic view of capabilities is necessary to credibly identify urgent concerns and prioritize meaningful policy initiatives. While the potential uses of BCIs are numerous, BCIs cannot at present or in the near future “read a person’s complete thoughts,” serve as an accurate lie detector, or pump information directly into the brain.
As BCIs evolve and are more commercially available across numerous sectors, it is paramount to understand the real risks such technologies pose. BCIs raise many of the same risks posed by home assistants, medical devices, and wearables, but implicate new and heightened risks associated with privacy of thought, resulting from recording, using, and sharing a variety of neural signals. Risks include, but are not limited to:
Collecting, and potentially sharing, sensitive information related to individuals’ private emotions, psychology, or intent;
Combining neurodata with other personal information to build increasingly granular and sensitive profiles about users for invasive or exploitative uses, including behavioural advertising;
Making decisions that significantly impact patients, employees, or students based on information drawn from neurodata (with distinct potential risks whether the conclusions drawn are accurate or inaccurate);
Security breaches compromising patient health and individual safety and privacy;
A lack of meaningful transparency and personal control over individuals’ neurodata; and
Surveilling individuals based on the collection of sensitive neurodata, especially from historically and heavily surveilled communities.
These technologies also raise important ethical questions around fairness, justice, human rights, autonomy, and personal dignity.
A Mix of Technical and Policy Solutions Is Best for Maximizing Benefits While Mitigating Risks
To promote privacy-protective and ethical uses of BCIs, stakeholders should adopt technical measures including but not limited to:
Providing hard on/off controls whenever possible;
Providing granular user controls on devices and in companion apps for managing the collection, use, and sharing of personal neurodata;
Operationalizing best practices for security and privacy when storing, sharing, and processing neurodata including:
Encrypting sensitive personal neurodata in transit and at rest (a minimal at-rest encryption sketch follows this list); and
Embracing appropriate security measures to combat bad actors.
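As a minimal sketch of the at-rest encryption point above, and assuming the third-party Python cryptography package is available, the example below encrypts a hypothetical neurodata sample before storage. Real BCI pipelines would use different data formats and would also need managed keys, access controls, and transport-layer protection.

```python
import json
from cryptography.fernet import Fernet  # symmetric, authenticated encryption

# Hypothetical neurodata sample; real BCI signal formats will differ.
sample = {
    "device_id": "headband-001",
    "timestamp": "2021-10-27T10:15:00Z",
    "channel_readings_uv": [12.4, -3.1, 8.7, 0.2],
}

# In practice the key would come from a managed key store, not be generated inline.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt before writing to disk or a database, so raw neurodata never rests in plaintext.
ciphertext = cipher.encrypt(json.dumps(sample).encode("utf-8"))

# Decrypt only inside an authorised processing context.
restored = json.loads(cipher.decrypt(ciphertext).decode("utf-8"))
assert restored == sample
```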
Stakeholders should also adopt policy safeguards including but not limited to:
Rethinking transparency, notice, terms of use, and consent frameworks to empower users with a baseline of BCI literacy around the collection, use, sharing, and retention of their neurodata;
Engaging IRBs, corporate review boards, ethical oversight, and other independent review mechanisms to identify and mitigate risks;
Facilitating participatory and inclusive community input prior to and during BCI development and rollout;
Creating dynamic technical, policy, and employee training standards to account for the gaps in current regulation; and
Promoting an open and inclusive research ecosystem by encouraging the adoption, where possible, of open standards for the collection and analysis of neurodata and the sharing of research data under open licenses and with appropriate safeguards in place.
Conclusion
Because the neurotechnology space is especially future-facing, developers, researchers, and policymakers will have to create best practices and policies that address existing concerns and strategically prioritize future risks, balancing the need for proactive solutions against the need to counter misinformation and hype. BCIs will likely augment and complicate many technologies already on the market, and privacy professionals will have to stay abreast of developments to protect this quickly growing space.
Call for Nominations: 12th Annual Privacy Papers for Policymakers
The Future of Privacy Forum invites privacy scholars and authors with an interest in privacy issues to submit finished papers to be considered for FPF’s 12th annual Privacy Papers for Policymakers Award. This award provides researchers with the opportunity to inject ideas into the current policy discussion, bringing relevant privacy research to the attention of the U.S. Congress, federal regulators, and international data protection agencies.
The award will be given to authors who have completed or published top privacy research and analytical work in the last year that is relevant to policymakers. The work should propose achievable short-term solutions or new means of analysis that could lead to real world policy solutions.
FPF is pleased to also offer a student paper award for students of undergraduate, graduate, and professional programs. Student submissions must follow the same guidelines as the general PPPM award.
We encourage you to share this opportunity with your peers and colleagues. Learn more about the Privacy Papers for Policymakers program and view previous years’ highlights and winning papers on our website.
FPF will invite winning authors to present their work at an annual event with top policymakers and privacy leaders in February 2022 (date TBD). FPF will also publish a printed digest of the summaries of the winning papers for distribution to policymakers in the United States and abroad.
Learn more and submit your finished paper by October 15th, 2021. Please note that the deadline for student submissions is November 5th, 2021.
Upcoming data protection rulings in the EU: an overview of CJEU pending cases
There has been a surge in questions posed by national courts to the Court of Justice of the EU (CJEU) in the past year on how various provisions of the General Data Protection Regulation (GDPR) should be interpreted and applied in practice. They vary from understanding essential aspects of the fundamental right to the protection of personal data, such as the scope of one’s right to access their own data or the appropriate lawful ground for complex processing like profiling and personalized advertising, to systemic questions such as the interplay of competition law and data protection law in digital markets. They also seek to dispel enforcement conundrums, such as identifying and quantifying non-material damages for breaches of the GDPR or clarifying the ne bis in idem principle for cases under the parallel purview of Data Protection Authorities and national courts.
According to the EU Treaties, EU Member-States’ courts may – or, in case no appeal from their decisions is possible, must – ask the CJEU to rule on the interpretation and validity of disputed provisions of EU law. Such decisions are known as preliminary rulings, by which the CJEU expresses its ultimate authority to interpret EU law and which are binding for all national courts in the EU when they apply those specific provisions in individual cases.
Since May 2018, when the GDPR became applicable across the EU, the CJEU has played an important role in clarifying the meaning and scope of some of its key concepts. For instance, the Court notably ruled that two parties as different as a website owner that has embedded a Facebook plugin and Facebook itself may qualify as joint controllers by taking converging decisions (Fashion ID case), that consent for online data processing is not validly expressed through pre-ticked boxes (Planet49 case), and that the European Commission Decision granting adequacy to the EU-US Privacy Shield framework is invalid as a mechanism for international data transfers, with supplemental measures potentially necessary to lawfully transfer data outside of the EU on the basis of Commission-vetted model clauses (Schrems II case).
Ever since the enactment of the 1995 EU Data Protection Directive, the CJEU has had a prominent role in expanding the scope of protection afforded to individuals by data protection law, in a way that ultimately influenced the text of the GDPR. Notable examples include landmark rulings on the definition of personal data (in Breyer and Nowak), the lawfulness of transferring data to countries outside of the EU (in Schrems I) and the so-called “right to be forgotten” (in Google Spain).
What are the questions that the Court is asked to clarify next? This overview includes a preview of the most interesting cases where the CJEU is expected to weigh in. The analysis focuses on questions that are relevant from the perspective of commercial data use, meaning that novel questions about personal data processing in the context of law enforcement, passenger name records and national elections have not been included in the overview. Table 1 below contains a list of links to the relevant cases as submitted to the CJEU, allowing for a more comprehensive view.
1. Clarifying essential aspects of personal data protection: right of access; lawful grounds for processing data for targeted advertising
Both the very active Austrian Supreme Court of Justice (Oberster Gerichtshof) and the Austrian Federal Administrative Court (Bundesverwaltungsgericht) have sent questions to the CJEU about the information that controllers are required to hand over in response to data subjects’ access requests.
In March 2021, the former asked the EU’s highest court, in a case involving the Austrian Postal Office, whether under their right of access data subjects must be informed about the categories of recipients of their personal data even in the cases where specific recipients have not yet been determined, but disclosures to those recipients are planned for the future. Or should they only be informed about the categories of recipients with whom personal data was already shared?
More recently, in August 2021, the Federal Administrative Court sought clarifications from the CJEU regarding what obtaining a “copy of the personal data undergoing processing” means. In this respect, the Bundesverwaltungsgericht asks whether such a right entails receiving entire documents/database excerpts in which the personal data are included or a mere “faithful reproduction” of the personal data processed by the controller. If the latter is the case, the referring court also wishes to know if there are exceptions to the rule, for the benefit of data subjects’ comprehension. Lastly, Austrian judges also query whether the information that should be made available to data subjects in a “commonly used electronic format” is only the “copy” of the personal data or also all the elements of Article 15(1) GDPR (e.g., information about the purposes of the processing and data retention periods).
Two very complex sets of questions in cases involving processing of data for targeted advertising purposes on social media have also reached the CJEU in 2021. The Court’s answers are likely to shape the future of how social media companies and online advertising businesses process personal data in the EU.
The first one, from April, comes from the Higher Regional Court of Düsseldorf, Germany (Oberlandesgericht Düsseldorf), in the first case of its kind combining antitrust and data protection enforcement (see also Section 3 below). In a case involving Facebook, the German court asks the CJEU whether data collection through user interfaces placed on third-party websites or apps that relate to attributes protected under Article 9(1) GDPR (e.g., political party or health-related outlets) counts as processing special categories of data. Should people who visit such websites or apps, or use the company’s plugins therein (e.g., “Like” buttons), be considered to have manifestly made their sensitive data public? As the European Data Protection Board (EDPB) has already provided guidance on these matters, it will be interesting to see to what extent the CJEU endorses the EDPB’s interpretation or diverges from it.
The Oberlandesgericht Düsseldorf also seeks to clarify whether personal data may be lawfully collected and combined by the company when obtained from other Facebook Group services and third-party websites/apps to offer personalised content and advertising, under the “contract” or “legitimate interests” legal bases. In parallel, the court asks the CJEU to rule on whether GDPR-compliant consent may be effectively and freely expressed by users “to a dominant undertaking”.
These last questions resemble others posed more recently by the Austrian Oberster Gerichtshof. On July 20, 2021, the court essentially asked the CJEU (see an unofficial translation of the questions) to clarify whether the social media platform can rely on “contract” as the lawful ground for processing personal data for personalized advertising, or whether it should rely on its users’ consent under the GDPR (by asking against which of these two lawful grounds the wording of its terms and conditions should be assessed).
In addition to the consequential question about the appropriate lawful ground in this particular context, the Austrian court also invited the CJEU to clarify how the data minimisation and purpose limitation principles as provided by the GDPR should apply in the context of personalised online advertising, in particular when it comes to sensitive data.
2. Accountability and due diligence
The German Federal Labour Court’s (Bundesarbeitsgericht) reference of October 2020 invites the CJEU to shed light on the circumstances that may lawfully lead organisations to dismiss their appointed Data Protection Officers (DPOs). The EDPB DPO guidelines state that “a DPO could still be dismissed legitimately for reasons other than for performing his or her tasks as a DPO (for instance, in case of theft, physical, psychological or sexual harassment or similar gross misconduct)”. With this reference, the German court seeks to understand whether the CJEU shares the same view and, if so, whether Article 38(3) GDPR would preclude a German provision that forbids employers from terminating the employment relationship with their DPOs in all cases, including for reasons other than the performance of the latter’s tasks.
Additionally, the referring court asks the CJEU whether the GDPR limitations on dismissal also apply to those DPOs who are appointed pursuant to a domestic law obligation, where the GDPR itself does not require their appointment.
Looking at a different accountability-related obligation, in a June 2021 reference, the Bulgarian Supreme Administrative Court (Varhoven administrativen sad) wishes to know if the mere occurrence of a data breach is sufficient to ascertain that the controller has not implemented appropriate technical and organisational measures to prevent the breach. In case of a negative answer, the CJEU is asked to further provide a benchmark against which national courts may assess the appropriateness of the implemented measures.
3. Administrative enforcement: How far can DPAs go and do antitrust authorities play a role?
In a set of questions from March 2021, the Budapest Regional Court (Fővárosi Törvényszék) aims to ascertain how far the GDPR-prescribed independence and corrective powers of Data Protection Authorities (DPAs) go. While it seems to be clear that individuals and companies have a right to lodge a judicial appeal against DPAs’ decisions or their inaction (see Article 79 GDPR), the Hungarian court highlights situations where both DPAs and courts are simultaneously called by individuals to assess the lawfulness of the same data processing operations.
Should DPAs have priority competence to determine GDPR infringements? Or should both DPAs and Courts independently examine the existence of an infringement, possibly arriving at different conclusions? May a DPA find a GDPR breach where, in parallel proceedings, a court has found that there was no such breach? The CJEU is thus expected to clarify how the ne bis in idem principle manifests under the complex enforcement system of the GDPR.
In another case already mentioned above (see Section 1), the Oberlandesgericht Düsseldorf seeks to clarify the fundamental question of how antitrust law enforcement and data protection rules interact and whether antitrust regulators may play a role in safeguarding data protection law as part of antitrust proceedings.
This case started from a 2019 decision of the German federal antitrust authority (Bundeskartellamt) against Facebook. The authority found a breach of German competition law with regard to abuse of market dominance by also relying on GDPR provisions in its assessment. These findings primarily concerned rules around valid consent for combining personal data across several services of the social media company. One of the measures imposed by the authority was a prohibition to collect user and device related data obtained from the use of its affiliated services, as well as from visits to third-party websites or apps without valid consent from users.
Facebook appealed the decision before the German courts, with the court of appeal (Oberlandesgericht Düsseldorf) expressing doubts about the legality of the antitrust regulator’s decision and deciding to suspend its effects as an interim measure until the matter is decided on the merits. In turn, the German Federal Court of Justice’s antitrust division overturned this interim measure and decided that the prohibition ordered by the Bundeskartellamt can be enforced while judicial proceedings are ongoing, before sending the case back to Düsseldorf to be decided on the merits.
The court in Düsseldorf suspended proceedings and asked the CJEU to clarify a number of essential questions (see also Section 1 above). In this context, can the Bundeskartellamt determine a GDPR breach by the company investigated in antitrust proceedings and order its correction, given that the regulator is not a supervisory authority under the Regulation, let alone the lead one? The referring Court noted that the Irish Data Protection Commissioner – as the lead DPA of the company – was already investigating alleged GDPR breaches relevant for this case.
4. Judicial redress: Can competitors engage in representative actions? And do “worries” and “fears” count as non-material damages?
An interesting question posed by the Austrian Supreme Court of Justice in December 2020 relates to whether persons other than harmed data subjects may initiate judicial proceedings for GDPR breaches against the infringer. The Austrian court wishes to know if Article 80(2) GDPR allows competitors, associations, entities and Chambers to sue, regardless of invoking specific data subjects’ rights infringements and the latter’s mandate, in cases where such bodies are entitled to initiate proceedings under national consumer law.
On such matters, the literature argues that Article 80(2) leaves it up to Member-States to determine whether non-profits with public interest statutory objectives and which are active in the defense of data subjects’ rights may bring own-initiative proceedings in their territory. Thus, it will be particularly interesting to see how the CJEU views the ability of competitors to sue other companies in putative defense of data subjects’ collective interests, notably in the absence of alleged infringements of individuals’ rights.
In May 2021, the Oberster Gerichtshof (Austria) asked the CJEU important questions about non-material damages under the GDPR: can courts award compensation to data subjects where a GDPR provision has been infringed but the data subjects have not suffered harm? And, if demonstrating harm is necessary, does Article 82 GDPR require data subjects’ non-material damages to go beyond the mere nuisance or discomfort caused by the infringement?
Just a month later, the Bulgarian Supreme Administrative Court went further and asked whether data subjects’ worries, fears and anxieties caused by a confidentiality breach involving personal data qualify as non-material damages which entitle them to compensation, even where data misuse by third parties has not been established and/or data subjects have not suffered any further harm.
According to its 2020 Annual Report, the average length of proceedings at the CJEU was 15.4 months in the past year. Therefore, it will take a while before the Court clarifies the questions summarized above – and it should be expected that for the very complex ones that raise novel issues, like the interaction between antitrust and data protection law, the proceedings will be longer than average. This overview of questions for preliminary rulings in any case indicates that while there are many GDPR provisions that need clarification, some of the most intricate issues raised by complex personal data processing and how data protection law applies to them have now reached the top court in the EU.
Table 1 — Pending data protection questions sent to the CJEU
GDPR provisions referenced in the pending cases include: Article 5(1)(a) and (e) GDPR; Article 6(1)(f) GDPR; Article 17 GDPR; Article 40 GDPR; and Articles 77 and 78 GDPR.
Joint Project to Explore Limits of Consent in Asia-Pacific Data Privacy Regimes
The Future of Privacy Forum (FPF), a non-profit organization that serves as a catalyst for privacy leadership and scholarship, has partnered with the Asian Business Law Institute (ABLI), a subsidiary of the Singapore Academy of Law (SAL). With this partnership, FPF Asia Pacific and Singapore’s top legal think tank join forces to offer a unique cooperation platform to support the convergence of data protection regulations and best privacy practices in the region.
It is envisioned that this collaboration will result in research, publications, and events. The agreement will build on the substantial work already done by the two think tanks in this area. FPF recently launched an Asia-Pacific office, which aims to advise stakeholders on emerging privacy laws and frameworks in the region.
The first joint activity of this partnership is an online seminar co-hosted by the Personal Data Protection Commission (PDPC) of Singapore. The event, titled “Exploring Trends: From ‘Consent-Centric’ Frameworks to Responsible Data Practices and Privacy Accountability in Asia Pacific”, takes place on September 16 from 2-4PM SGT during Singapore’s Personal Data Protection Week. The virtual panel will highlight the limits of the consent-based approach to data protection as it has developed in the region and globally, and the usefulness of developing alternatives, like GDPR-inspired “legitimate interests.” Opening remarks will be made by Yeong Zee Kin, PDPC Deputy Commissioner, with closing remarks by Raymund Liboro, National Privacy Commissioner of the Philippines and co-chair of the ASEAN Data Privacy Forum. The panels consist of top data protection and privacy experts from Asia-Pacific countries, along with FPF Senior Fellow for India Malavika Raghavan and FPF Managing Director for Europe Rob van Eijk.
“In Asia-Pacific as elsewhere, obtaining the user’s consent has long been considered the basis of any regulatory and compliance approach to data protection,” said FPF’s new Manager for Asia-Pacific, Dr. Clarisse Girot. “Today, this approach has been called into question, and an increasing number of regulators and privacy professionals are promoting accountability over a consent-centric approach to data protection. However, the fragmentation of data protection laws in Asia Pacific is an obstacle to the development of common regional solutions.”
This theme will be one of FPF Asia Pacific’s priorities for the remainder of 2021, in close coordination with the regulators in the region. FPF and ABLI will also release a publication in the coming months that will provide a comparative analysis on the requirements for consent in Asia-Pacific data privacy laws, including recommendations for consistent implementation across the region.
The publication will include insights from highly respected data protection experts across 14 jurisdictions in the Asia-Pacific region. The aim is to propose solutions which are neutral and adapted to the context of each of the jurisdictions covered by this project.
“There are synergies between FPF and ABLI’s work on data privacy,” said Rama Tiwari, Chief Executive at SAL. “This focus on data privacy is especially critical as responses to the COVID-19 situation rely massively on data use and data flows.”
For more information on the Asian Business Law Institute and the Singapore Academy of Law, please visit: https://sal.org.sg/
The Future of Privacy Forum (FPF) and SAE’s Mobility Data Collaborative (MDC) have created a transportation-tailored privacy assessment that provides practical and operational guidance to organizations that share mobility data, such as data from the use of ride-hailing services, e-scooters, or bike-sharing programs. The Mobility Data Sharing Assessment (MDSA) will help organizations assess and reduce privacy risks in their data-sharing processes.
New mobility options are being rapidly adopted in many cities, and there is a need to share data so that cities can manage the public right-of-way and companies can offer or improve services and products.
“These are practical resources to support mobility data sharing between organizations in both the public and private sectors,” said Chelsey Colbert, Policy Counsel at FPF. “The tool is interoperable with leading industry frameworks, and it is technology-neutral so it may be used for any data sharing methods related to ride-hail and micromobility.”
The goal of the MDSA is to enable responsible data sharing that protects individual privacy, respects community interests and equities, and encourages transparency to the public. By equipping organizations with an open-source, interoperable, customizable, and voluntary framework with guidance, the barriers to sharing mobility data will be reduced.
The Mobility Data Sharing Assessment consists of the following components:
A Tool that provides a practical, customizable, and open-source assessment for organizations sharing mobility data.
An Operator’s Manual that provides detailed instructions, guidance, and additional resources to assist organizations as they utilize the tool.
An Infographic that provides a visual overview of the MDSA process.
“If data from mobility initiatives are going to be used to solve today’s complex mobility challenges, organizations need to be able to conduct thoughtful, in-depth legal and policy reviews,” said Pooja Chaudhari, Head of New Mobility at SAE International and Director of the MDC. “The MDSA can be used by both data providers and recipients to drive innovation in the ever-evolving mobility landscape while ensuring user privacy.”
Learn more about the Mobility Data Collaborative on its website.
Chelsey Colbert is Policy Counsel at the Future of Privacy Forum. Chelsey leads FPF’s portfolio on mobility and location data, which includes connected and automated vehicles, ride-sharing, micro-mobility, drones, delivery robots, and mobility data sharing.
Kelsey Finch, CIPP/US, is Senior Counsel at the Future of Privacy Forum and represents FPF from Seattle, WA. Kelsey leads FPF’s projects on smart cities and communities, data de-identification, ethical data-sharing and research, and other select projects, and serves as an expert and thought leader across the country through speaking engagements, media interviews, and interaction with local, state, and federal regulators and strategic partners.