FPF and Singapore PDPC Event: “Data Sovereignty, Data Transfers and Data Protection – Impact on AI and Immersive Tech”
On July 21, the Future of Privacy Forum (FPF) and Singapore’s Personal Data Protection Commission (PDPC) co-hosted a workshop titled “Data Sovereignty, Data Transfers and Data Protection – Impact on AI and Immersive Tech” at the Marina Bay Sands Expo and Convention Center in Singapore, as part of Singapore’s Personal Data Protection Week.
The event focused on international data transfers and their importance to new and emerging technologies.
FPF moderated two panel discussions, bringing together experts and thought leaders from academia, government, industry, and law:
- The first panel, titled “Data Localization vs International Data Transfers,” compared and analyzed the various international data transfer frameworks and data localization requirements that exist around the world today. The panel was moderated by Dr. Gabriela Zanfir-Fortuna (Vice President for Global Privacy, FPF) and featured panelists Yeong Zee Kin (Deputy Commissioner, PDPC), Lara Kehoe Hoffman (Vice President, Data Privacy and Security (Legal) and Global Data Protection Officer, Netflix), Takeshige Sugimoto (Managing Director and Partner, S&K Brussels LLP; Director, Japan DPO Association), Tobias Judin (Head of International Office, Norwegian Data Protection Authority), and David Hoffman (Steed Family Professor of the Practice of Cybersecurity Policy, Duke University).
- The second panel, titled “Old Challenges of New Technologies – AI and Immersive Tech,” explored how the landscape of international data transfer laws and regulations affects artificial intelligence (AI) and immersive technology, including augmented reality (AR), virtual reality (VR), and the “metaverse.” The panel was moderated by Josh Lee (Director, FPF APAC) and featured panelists Raina Yeung (Head of Privacy and Data Policy Engagement, Meta), Simon Chesterman (Dean, Faculty of Law, National University of Singapore), Marcus Bartley-Johns (Asia Regional Director, Government Affairs and Public Policy, Microsoft), Eunice Lim (Director for Corporate Affairs, Asia Pacific, Workday), and Jules Polonetsky (Chief Executive Officer, FPF).
This post summarizes the exciting discussions from these two panels and presents the key takeaways.
Panel 1: “Data Localization vs International Data Transfers”
The first panel stressed the need to distinguish data localization measures from data transfer obligations, as the two have different goals and use different mechanisms to accomplish them. Yeong Zee Kin explained that data localization and data transfer obligations are separate but overlapping issues. From a regulatory perspective, data localization measures either prohibit data flows or mandate local storage and processing, while data transfer obligations allow data to flow in a protected and safe manner. Data localization measures may appear in privacy laws as well as sectoral regulations, and they target different types of data (including, in some circumstances, non-personal data). Data transfer mechanisms also come in many forms, such as certification, standard contractual clauses (SCCs), and binding corporate rules (BCRs), each with its own method of ensuring data protection. This range of transfer mechanisms provides solutions that can be tailored to different use cases and scales of data transfer.
Yeong stressed that global stakeholders need to reset the conversation around data flows in a way that respects different cultures and promotes global consensus on key issues like supervisory and law enforcement access to data. This resonated with Tobias Judin, who said that the EU can continue to play a strong role in promoting consensus around data transfers. He stressed that even as countries pass their own rules, there are still options to facilitate data flows, and that governments can accomplish data protection goals without imposing localization requirements. Judin also highlighted how the EU has created incentives for other countries to adopt privacy laws that make sense in their own legal and cultural contexts. While the standard for adequacy is strict, other countries have been able to pass laws that meet its requirements.
Landscape of Data Localization
Takeshige Sugimoto presented an overview of the fragmented landscape of data localization. He stressed that tensions over cross-border data flows have become more global and now involve numerous bilateral, country-to-country disputes. For instance, beyond the transatlantic data transfers debate, tensions are emerging over data flows in the context of EU-Russia and EU-China relations. Sugimoto indicated that if European or American regulators restrict data transfers to China, the latter could retaliate in kind. This is becoming a real possibility, as regulators in both the EU and the U.S. have shown willingness to take enforcement actions against Chinese companies and have even begun to promote their own localization requirements.
Sugimoto highlighted that some international developments, including the Global Cross-Border Privacy Rules (CBPR) certification system, may help mitigate the risk of fragmentation, but that such developments will not completely alleviate tensions in global data transfer rules. He stressed that, on the one hand, if the U.K. loses its EU adequacy status and in turn joins the CBPR system, the EU may be left behind. On the other hand, even if alternative frameworks become a global standard and mitigate the risk of fragmentation, China’s data localization regime will continue to exist and exert influence abroad.
Despite this, Sugimoto indicated that there are positive developments. Beyond the U.S., the EU, and China, other countries are playing a strong role in shaping conversations around data flows. Both Japan and South Korea have demonstrated that it is possible to promote an international standard for data protection while maintaining unique legal systems and cultures.
The panel also explored the perspective of the private sector with respect to data localization and the challenges companies face when responding to such measures.
Cybersecurity and Localization
Data localization also raises security concerns, as organizations and governments rely on information sharing to monitor and respond to security incidents, threats, and vulnerabilities. As David Hoffman indicated, governments adopt data localization measures not only for privacy reasons but also for other legitimate government purposes, such as facilitating law enforcement, ensuring national security, and keeping enough data available to assist with tax collection.
Hoffman stressed that there is a need for the data protection community to address each of these motivations separately while recognizing and reiterating that privacy and security mutually reinforce each other. Indeed, as Hoffman explained, safeguarding security was one of the primary goals of the 1980 Privacy Guidelines from the Organisation for Economic Co-operation and Development (OECD). Security threats can undermine privacy because they increase the risk that personal information will be exposed.
At the same time, cross-border data flows are a core component of how companies and governments address and mitigate such threats, through the sharing of threat and attack indicators that often include IP addresses, which can fall within definitions of “personal data.” While collecting and transferring personal data can put privacy at risk, a use of data that substantially increases cybersecurity may have a net positive privacy effect. That net positive effect can then be increased through effective use limitations and accountability measures, rather than reliance on collection limitations or data localization. Hoffman affirmed that one step toward realizing this is understanding the rationales and motivations behind data localization and identifying other means of satisfying those government interests while still allowing the transfer of data necessary for effective cybersecurity.
Panel 2: “Old Challenges of New Technologies – AI and Immersive Tech”
The second panel focused on the risks and opportunities presented by new and emerging technologies, like AI, AR/VR, and the “metaverse,” which often involve the collection and processing of personal data. Panelists also considered how these technologies could be regulated in the future and how measures to regulate international data transfers may impact the development and deployment of these technologies.
Artificial Intelligence (AI)
Marcus Bartley-Johns explained that AI is not a future possibility but a present reality, as people regularly interact with AI systems in their professional and personal lives through email, social media, spell checkers, and security and threat protection, among other applications. Raina Yeung explained that AI is already an essential component of Meta’s systems and is used for a wide range of purposes, from polling to serving advertisements to taking down misleading and harmful content. She highlighted that AI is an area of strategic importance for both governments and industry, as it drives economic development and helps find solutions to global challenges. Eunice Lim reiterated that AI affects, and will continue to affect, the way we live and work. However, she also noted that AI is not meant to replace human workers, but rather to augment them and make life easier by taking over repetitive tasks.
Jules Polonetsky noted that AI may also present new challenges in terms of deception and discrimination. Polonetsky explained that both the societal data used to train AI systems and how those systems are deployed in practice can reflect social inequalities and prejudices. Yeung agreed, adding that although AI may bring benefits, it also carries the risk of potential harms and therefore must be developed and deployed responsibly. Bartley-Johns stressed the importance of looking at the context of AI deployments, as not all applications affect privacy or rights. To illustrate, he contrasted AI-based facial recognition systems, which process personal data and could affect data subjects’ privacy and legal rights if used, for example, to deny them access to a service or cause them to be suspected of a crime, with AI-based malware detection systems, which may not process personal data at all, focusing only on telemetry from attempts to access devices and systems.
Bartley-Johns explained that a common challenge is viewing responsible AI as a purely technical issue. In his view, implementing responsible AI is a socio-technical challenge: how the technology functions is only the beginning; broader concerns are how humans will interact with, have oversight over, and (where necessary) exercise decision-making power over the AI. Lim explained that the main risk from irresponsible use of AI is loss of trust and called on the public and private sectors to co-create standards and principles for AI. In this respect, Lim highlighted that Workday is working with developers to test and implement procedures for identifying and mitigating instances of AI bias. Yeung shared that Meta’s dedicated and cross-disciplinary Responsible AI (RAI) team builds and tests approaches to help ensure that their machine learning (ML) systems are designed and used responsibly.
Panelists all stressed that regulation has an important role to play in building citizens’ confidence in the technology and setting a baseline for companies’ responsibilities. Bartley-Johns highlighted that the difficulty lies in getting the regulation right – ensuring that the technology is available to companies of all sizes and that data is not locked up by a small number of companies. Lim stressed that regulation should be risk-based, identifying the AI use cases that present the highest risks and directing resources to mitigating unintended consequences, and should recognize the different actors in the AI ecosystem, including those who develop AI and those who deploy it. Though there is ongoing debate about who is best placed to address these challenges, Polonetsky suggested that privacy professionals could play a role by, for example, undertaking data protection impact assessments, raising issues internally when they arise, and engaging proactively with affected communities to understand their positions and give them a voice. At the same time, Polonetsky noted that expectations and norms around AI will change over time.
Simon Chesterman explained that conversations around AI regulation tend to assume that new laws would have to be drafted to regulate AI while overlooking the significant challenge that implementing these laws would present in practice. In Chesterman’s view, the central question in regulating AI is not whether to pass new laws but rather, how to apply existing laws to new use cases involving AI. He explained that on a fundamental level, “AI systems” cannot be treated as a discrete regulatory category as they encompass many different technologies and methods. Additionally, Chesterman said it would be a misstep for regulators to grant AI systems legal personality as this may make it easier for humans who misuse AI to avoid liability for their actions. He emphasized that there can always be a human-in-the-loop and that some decisions, such as when to fire a weapon or find a person liable in the judicial system, rightly belong with human decision-makers who have been appointed within a politically accountable framework.
Immersive Technologies and the Metaverse
Yeung explained that the metaverse is the next logical evolution of the internet and social networking platforms, which were initially text-based, evolved to include photo sharing as mobile phones became more common, and later added video sharing as internet speeds increased around the globe. In Yeung’s view, technology – especially videoconferencing during the COVID-19 pandemic – has already done much to bring people together, but the metaverse will transform today’s 2D online social interaction into a more immersive, 3D experience. Yeung also described the value the metaverse will bring beyond gaming and entertainment, including its potential to transform education, workforce training, and healthcare, and to create economic opportunities for digital creators, small businesses, and brands. Bartley-Johns explained how immersive technologies will bridge the gap between the physical and digital worlds in a range of contexts, such as creating an “industrial metaverse” that combines Internet-of-Things (IoT) devices with “digital twins,” and using AR to provide training and technical support remotely.
Chesterman noted that improvements in technology over the last decade have raised two major regulatory issues. First, consent no longer makes sense in the context of ubiquitous, large-scale data collection coupled with high-speed computing; Chesterman highlighted Singapore as an example of a jurisdiction that has started to move away from consent toward an alternative, accountability-based model. Second, privacy expectations around immersive technologies like AR and VR may differ from those that apply to conventional photography in public spaces. Chesterman added that the metaverse may give rise to disputes over ownership of a person’s visual identity, which may become valuable and require additional protection. Bartley-Johns highlighted further potential privacy concerns around inferences drawn from data collected in the metaverse, especially in the employment context. He raised the example of technology that tracks employees’ eye movements while their supervisor is talking, with that data then being used in the employees’ performance assessments. Yeung explained that Meta is focused on a few areas where there are hard questions without easy answers, such as economic opportunity, privacy, safety and integrity, and equity and inclusion. Getting these areas right is critical to realizing the potential benefits of the metaverse; as such, Meta is investing in research in these areas through partnerships with researchers and academic institutions.
Cross-Border Data Flows
Polonetsky called for deeper dialogue on data localization between national leaders, policymakers, and the developers of products and services that use emerging technologies, highlighting the challenges presented by the spectrum of interests across different stakeholders. Polonetsky stressed that the task for privacy professionals is to present effective and viable alternatives to data localization that enable government and industry to achieve their respective aims. Bartley-Johns concurred with Polonetsky on the need to reframe the conversation around international data flows, highlighting that the conversation in APAC has increasingly focused on what legal and technical means exist to assure regulators and data subjects that transferred data will be protected to the same standard as if it had remained in its source jurisdiction.
Editor: Isabella Perera