The First Japan Privacy Symposium: G7 DPAs discussed their approach to rein in AI, and other regulatory priorities
The Future of Privacy Forum and S&K Brussels hosted the first Japan Privacy Symposium in Tokyo, on June 22, 2023, following the G7 Data Protection and Privacy Commissioners roundtable. The Symposium brought global thought leadership on the interaction of data protection and privacy law with AI, as well as insights into the current regulatory priorities of the G7 Data Protection Authorities (DPAs) to an audience of more than 250 in-house privacy leaders, lawyers, consultants and journalists from Japan and the region.
The program started with a keynote address from Commissioner Shuhei Ohshima (Japan’s Personal Information Protection Commission), who shared details about the results of the G7 DPAs Roundtable from the day before. Two panels followed, featuring Rebecca Kelly Slaughter (Commissioner, U.S. Federal Trade Commission), Wojciech Wiewiórowski (European Data Protection Supervisor, EU), Philippe Dufresne (Federal Privacy Commissioner, Canada), Ginevra Cerrina Feroni (Vice President of the Garante, Italy), John Edwards (Information Commissioner, UK), and Bertrand du Marais (Commissioner, CNIL, France). Jules Polonetsky, FPF CEO, and Takeshige Sugimoto, Managing Partner at S&K Brussels and FPF Senior Fellow, hosted the Symposium.
The G7 DPA agenda is built on three pillars: Data Free Flow with Trust, emerging technologies, and enforcement cooperation
The DPAs of the G7 nations started to meet annually in 2020, following the initiative of the UK’s Information Commissioner’s Office during the UK’s G7 Presidency that year. This is a new venue for international cooperation among DPAs, limited to Commissioners from Canada, France, Germany, Italy, Japan, the United Kingdom, the United States, and the European Union. Throughout the year, the DPAs maintain a permanent channel of communication and implement a work plan adopted during their annual Roundtable.
In his keynote at the Japan Privacy Symposium, Commissioner Shuhei Ohshima laid out the results of this year’s Roundtable, held in Tokyo on June 20 and 21. The Commissioner highlighted three pillars guiding the group’s cooperation this year: (I) Data Free Flow with Trust (DFFT), (II) emerging technologies, and (III) enforcement cooperation.
The G7 Commissioners’ Communique expressed overall support for the DFFT political initiative, welcoming the reference to DPAs as stakeholders in the future Institutional Arrangement for Partnership (IAP), a new structure the G7 Digital Ministers announced earlier in April to operationalize the DFFT. However, in the Communique, the G7 DPAs emphasized that they “must have a key role in contributing on topics that are within their competence in this Arrangement.” It is noteworthy that, among their competencies, most G7 DPAs have the authority to order the cessation of data transfers across borders if legal requirements are not met (see, for instance, this case from the CNIL – the French DPA, this case from the European Data Protection Supervisor, or this case from the Italian Garante).
Currently, the IAP seems to reserve a key role for governments themselves, in addition to stakeholders and “the broader multidisciplinary community of data governance experts from different backgrounds,” according to Annex I of the Ministerial Declaration announcing the Partnership. The DPAs are singled out only as an example of such experts.
In the Action Plan adopted in Tokyo, the G7 DPAs included clues as to how they see the operationalization of DFFT playing out: through interoperability and convergence of existing transfer tools. As such, they endeavor to “share knowledge on tools for secure and trustworthy transfers, notably through the comparison of Global Cross-Border Privacy Rules (CBPR) and EU certification requirements, and through the comparison of existing model contractual clauses.” (In an analysis touching broadly beyond the G7 jurisdictions, the Future of Privacy Forum published a report earlier this year emphasizing many commonalities, but also some divergence, among three sets of model contractual clauses proposed by the EU, the Iberoamerican Network of DPAs, and ASEAN).
Arguably, though, DFFT was not the main point on the G7 DPAs’ agenda. They adopted a separate and detailed Statement on generative AI. In his keynote, Commissioner Shuhei Ohshima remarked that “generative AI adoption has increased significantly.” In order to promote trustworthy deployment and use of the new technology, “the importance of DPAs is increasing also on a daily basis,” the Commissioner added.
Generative AI is not being deployed in a legislative void, and data protection law is the immediately applicable legal framework
Top of mind for G7 data protection and privacy regulators is AI, and generative AI in particular. “AI is not a law-free zone,” said FTC Commissioner Slaughter during her panel at the Symposium, being very clear that “existing laws on the books in the US and other jurisdictions apply to AI, just like they apply to adtech, [and] social media.” This is apparent across the G7 jurisdictions: in March, the Italian DPA issued an order against OpenAI to stop processing personal data of users in Italy following concerns that ChatGPT breached the General Data Protection Regulation (GDPR); in May, the Canadian Federal Privacy Commissioner opened an investigation into ChatGPT jointly with provincial privacy authorities; and, in June, Japan’s PIPC issued an administrative letter warning OpenAI that it needs to comply with requirements from the Act on the Protection of Personal Information, particularly regarding the processing of sensitive data.
At the Japan Privacy Symposium, Ginevra Cerrina Feroni, VP of the Garante, shared the key concerns guiding the agency’s enforcement action against OpenAI, which was the first such action in the world. She highlighted several risks, including a lack of transparency about how OpenAI collects and processes personal data to deliver the ChatGPT service; uncertainty regarding a lawful ground for processing personal data, as required by the GDPR; a lack of avenues to comply with the rights of data subjects, such as access, erasure, and correction; and, finally, the potential exposure of minors to inappropriate content, due to inadequate age gating.
After engaging in a constructive dialogue with OpenAI the Garante suspended the order, seeing improvements in previously flagged aspects. “OpenAI published a privacy notice to users worldwide to inform them how personal data is used in algorithmic training, and emphasized the right to object to such processing,” the Garante Vice President explained. She continued, noting that OpenAI “provided users with the right to reject their personal data being used for training the algorithms while using the service, in a dedicated way that is more easily accessible. They also enabled the ability of users to request deletion of inaccurate information, because – and this is important – they say they are technically unable to correct errors.” However, Vice President Cerrina Feroni mentioned that the investigation is ongoing and that the European Data Protection Board is currently coordinating actions among EU DPAs on this matter.
The EDPS added that purpose limitation is among his chief concerns with services like ChatGPT, and generative AI more broadly. “Generative AI is meant to advance communication with human beings, but it does not provide fact-finding or fact-checking. We should not expect this as a top feature of Large Language Models. These programs are not an encyclopedia; they are just meant to be fluent, hence the rise of possibilities for them to hallucinate,” Supervisor Wiewiórowski said.
Canadian Privacy Commissioner Philippe Dufresne emphasized that how we relate to generative AI from a privacy regulatory perspective “is an international issue.” Commissioner Dufresne also added, “a point worth repeating is that privacy must be treated as a fundamental right.” This is important, as “when we talk about privacy as a fundamental right, we point out how privacy is essential to other fundamental human rights within a democracy, like freedom of expression and all other rights. If we look at privacy like that, we must see that by protecting privacy, we are protecting all these other rights. Insofar as AI touches on these, I do see privacy being at the core of all of it,” Commissioner Dufresne concluded.
The G7 DPAs’ Statement on Generative AI outlines their key concerns, such as lack of legal authority to process personal data at all stages
In the aforementioned Generative AI Statement, the G7 data protection regulators laid out their main concerns in relation to how personal data is processed through this emerging type of computer program and service. First and foremost, the commissioners are concerned that processing of personal data lacks legal authority during all three relevant stages of developing and deploying generative AI systems: for the data sets used to train, validate and test generative AI models; for processing personal data resulting from the interactions of individuals with generative AI tools during their use; and, for the content that is generated by generative AI tools.
The commissioners also highlighted the need for security safeguards to protect against threats and attacks that seek to invert generative AI models, and that would technically prevent extractions or reproductions of personal data originally processed in datasets used to train the models. They also advocated for mitigation and monitoring measures to ensure personal data created by generative AI is accurate, complete, and up-to-date, as well as free from discriminatory, unlawful, or otherwise unjustifiable effects.
It is clear that data protection and privacy commissioners are proactive about ensuring generative AI systems are compatible with privacy and data protection laws. Only two weeks after their roundtable in Tokyo, it was reported that the US FTC initiated an investigation against OpenAI. And this proactive approach is intentional. As UK’s Information Commissioner, John Edwards, made clear, the commissioners are “keen to ensure” that they “do not miss this essential moment in the development of this new technology in a way that [they] missed the moment of building the business models underpinning social media and online advertising.” “We are here and watching,” he said.
Regardless of the adoption of new AI-focused laws, DPAs would remain central to AI governance
The Commissioners also discussed the wave of legislative initiatives targeting AI in their jurisdictions. AI systems are not built and deployed in a legislative void: data protection law is largely and immediately relevant, as is consumer protection law, product liability rules, and intellectual property law. In this environment, what is the added value of specific, targeted legislation addressing AI?
Addressing the EU AI Act proposal, European Data Protection Supervisor Wiewiórowski noted that the EU did not initiate the legislation because the legislator thought there was a vacuum. “We saw that there were topics to be addressed more specifically for AI systems. There was a question whether we approach it as a product, service, or some kind of new phenomenon as far as legislation is concerned,” he added. As for the role of the DPAs once the AI Act is adopted, he brought up the fact that in the EU, data protection is a fundamental right: this means that all legislation or policy solutions governing the processing of personal data in one way or another must be looked at through this lens. As supervisory authorities tasked with guaranteeing this fundamental right, DPAs will continue playing a role.
The framework ensuring the enforcement of the AI Act is still under debate, as EU Member States are tasked with designating competent national authorities, and the European Parliament hopes to create a supranational collaborative body to play a role in enforcement. However, one thing is certain: in the proposal, the EDPS has been designated the competent authority to ensure that EU agencies and bodies comply with the EU AI Act.
The CNIL seems to be eyeing the designation as EU AI Act enforcer as well. Commissioner du Marais pointed out that “since 1978, the French Act on IT and Freedom has banned automated decisions. We have a fairly long and established body of case law.” Earlier this year, the CNIL created a dedicated department including data and computer scientists among staff to monitor how AI systems comply with legal obligations stemming from data protection law. “To be frank, we don’t know yet what will come out of the legislative process, but we have started to prepare ourselves. We have also been designated by domestic law as supervisory and certification authority for AI during the 2024 Olympic Games.”
The Garante has a long track record of enforcing data protection law on algorithmic systems and decision-making that impacted the rights of individuals. “The role of the Garante in safeguarding digital rights has always been prominent, even when the issue was not yet widely recognized by the public,” said Vice President Cerrina Feroni. Indeed, as shown by extensive research published last year by the Future of Privacy Forum, European DPAs have long been enforcing data protection law in cases where automated decision-making was central. The Garante led impactful investigations against several gig economy apps and their algorithms’ impacts on people.
Canada is also in the midst of legislating AI, having introduced a bill last year that is currently under debate. “There is similarity with the European proposal, but [the Canadian bill] focuses more on high impact AI systems and on preventing harms and biased outputs and decision-making. It provides for significant financial fines,” Commissioner Dufresne explained. Under the bill, enforcement is currently assigned to the relevant ministry in the Canadian government. The Privacy Commissioner explained that the regulatory activity would be coordinated with his office, but also with the competition, media, and human rights regulators in Canada. When contributing recommendations during the legislative process, Commissioner Dufresne noted that he suggested “privacy to be a key principle.” In light of his vision that privacy as a fundamental right is essential for the realization of other fundamental rights, the Commissioner had a clear message that “the DPAs need to be front and center” of the future of AI governance.
UK Commissioner Edwards echoed the value of entrenched collaboration among digital regulators, adding that the UK already has an official “Digital Regulators Cooperation Forum,” established with its own staff. The entity “is important to provide a coherent regulatory framework,” he said.
Children’s privacy is a top priority across borders, with new regulatory approaches showing promising results
One of the key concerns that the G7 DPAs have in relation to generative AI is how the new services are dealing with children’s privacy. In fact, the regulators have made it one of their top priorities to broadly pursue the protection of children’s privacy when regulating social media services, targeted advertising, or online gaming, among others.
Building on a series of recent high-profile cases brought by the FTC in this space, Commissioner Slaughter couldn’t have been clearer: “Kids are a huge priority issue for the FTC.” She reminded the audience that COPPA (the Children’s Online Privacy Protection Act) has been around for more than two decades, and it is one of the strongest federal privacy laws in the US: “The FTC is committed to enforcing it aggressively.” Commissioner Slaughter explained that the FTC’s actions, such as the recent case against Epic Games, include considerations related to teenagers as well: even though teenagers are not technically covered by COPPA protections, they are covered by the FTC’s “unfair practices” doctrine.
UK Commissioner John Edwards gave a detailed account of the impact of the UK’s Age Appropriate Design Code, launched by his office in 2020, on the design of online services provided to children. “We have seen genuine changes, including privacy settings being automatically set to very high for children. We have seen children and parents and carers being given more control over privacy settings. And we have seen that children are no longer nudged to lower privacy settings, with clearer tools and steps in place for them to exercise their data protection rights. We have also seen ads blocked for children,” Commissioner Edwards said, pointing out that these are significant improvements for the online experience of children. These results have been obtained primarily through a collaborative approach with service providers, who implemented changes after their services were subject to audits conducted by the regulator.
Children’s and teenagers’ privacy is also top of mind for the CNIL. Alongside a series of guidance documents, recommendations, and actions, the French regulator is adding another layer to its approach: digital education. “We have made education a strategic priority. We have a partnership with the Ministry of Education and we have available a platform to certify digital skills for children, as well as resources for kids and parents,” Commissioner du Marais said. Regarding regulatory priorities, he emphasized attention to age verification tools. Among the principles the French regulator favors for age verification are no direct collection of identity documents, no age estimates based on web browsing history, and no processing of biometric data to recognize an individual. The CNIL has asked websites not to carry out age verification themselves, and to instead rely on third-party solutions.
The discussions of the G7 DPA Commissioners who participated in the first edition of the Japan Privacy Symposium laid out a vibrant and complex regulatory landscape, centered on new challenges posed to societal values and the rights of individuals by AI technology, but also marking advancements on perennial topics like cross-border data transfers and children’s privacy. More meaningful and deeper enforcement cooperation is to be expected among the G7 Commissioners, whose Action Plan expressed their commitment to move towards constant exchanges related to enforcement actions and to revitalize existing global enforcement cooperation networks, like GPEN (the Global Privacy Enforcement Network). Next year, the G7 DPA Commissioners will meet in Rome.
Editor: Alexander Thompson