Regulatory Strategies of Data Protection Authorities in the Asia-Pacific Region: 2024 and Beyond
The Asia-Pacific (APAC) region has emerged as a dynamic and rapidly evolving landscape for data protection regulation. As digital economies flourish and cross-border data flows intensify, data protection authorities (DPAs) across the region are grappling with complex challenges posed by technological advancements, changing business practices, and evolving societal expectations regarding privacy.
This Report provides a comprehensive analysis of strategy documents and key regulatory actions of the DPAs in 10 jurisdictions, published or developed in 2023 and 2024, setting out regulatory priorities for the following years:
1. Australia
2. China
3. Hong Kong, Special Administrative Region of China (SAR)
The Report is structured in two parts. The first provides an overview of key trends in the APAC region and identifies priority areas and future initiatives that APAC DPAs indicate they will focus on in the years to come.
The second provides a brief profile of each DPA and summarizes their regulatory actions for the period of 2023-2024, as well as any key strategy documents available.
Our analysis provides insights into how these DPAs have been working towards implementing their strategic priorities throughout 2023 and 2024. To the extent possible, the analysis in this Report is based on official strategy documents – that is, master plans, statements of regulatory priorities, annual reports, and the like – published by these DPAs between 2023 and 2024, supplemented by an examination of significant regulatory actions taken by the DPAs during this period.
While we offer a thorough examination of recent and ongoing initiatives, it is important to note that the data protection landscape is dynamic and rapidly evolving. Therefore, this Report not only serves as a retrospective overview but also aims to highlight prospective directions that DPAs may pursue in 2025 and beyond. By highlighting the trajectory of these regulatory bodies, we hope that this Report will aid readers in anticipating potential developments in data protection regulation and enforcement across the region. However, readers should bear in mind that unforeseen technological advancements, geopolitical shifts, or other factors may influence future regulatory approaches in ways that cannot be fully predicted at the time of publication.
The Report recognizes that each jurisdiction faces unique challenges, operates within distinct legal and cultural contexts, and may prioritize different aspects of data protection based on its specific circumstances. The Report is therefore not intended to make value judgments on DPAs, rank them, or evaluate their effectiveness in key areas. Rather, our aim is to identify commonalities and divergences in the DPAs’ priorities and approaches, in order to shed light on key trends in the APAC region. We hope that these insights will prove useful to policymakers, businesses, and privacy professionals as they navigate the APAC region’s complex data protection landscape.
To ensure a comprehensive and accurate understanding of this Report’s scope and methodology, readers should note the following key considerations:
Not all the above DPAs consistently publish official strategy documents. Where a given DPA has not published a strategy document for the period of 2023-2024, the Report’s analysis infers the relevant DPA’s priorities from its regulatory actions.
Not all the above DPAs provide official documents and information in English. Where official English language translations of relevant documents and information are unavailable, we have worked from machine translations.
Our analysis focuses primarily on the DPAs’ strategies and priorities regarding the private sector. While public sector data protection is an important area, it often raises distinct considerations that are beyond the scope of this Report.
Analysis of key strategic documents and recent regulatory actions across the 10 APAC DPAs reveals several common priorities for 2024 and beyond.
Cybersecurity and data breach response emerged as the most widespread priority, with 90% of the DPAs prioritizing efforts to combat cyber threats and enhance organizational readiness for data breaches. This reflects the growing frequency and sophistication of cyber attacks across the region and globally.
Cross-border data transfers were a key priority for 80% of the DPAs, highlighting the increasing importance of facilitating secure international data flows in an interconnected digital economy.
AI governance and regulation was a key focus for 70% of the DPAs, as authorities grappled with the rapid advancement and adoption of AI technologies, particularly generative AI, in recent years.
Regulation of the use of biometric data, including facial recognition technology (FRT), was prioritized by 60% of DPAs, indicating growing concerns about the privacy implications of these technologies.
Finally, 50% of DPAs emphasized the protection of children’s personal data, recognizing the unique needs of young people in digital environments.
Updated FPF Infographic Explores Data in Connected Vehicles
Today, The Future of Privacy Forum is launching the Data and the Connected Vehicle Infographic 2.0, including new updates to account for the types of data associated with connected vehicles, features in and outside of the vehicle, and data handlers who receive and process data. Lawmakers, manufacturers, privacy professionals, and consumers are actively engaged in work to examine and respond to privacy and transparency practices related to personal data collected in and around vehicles. The updated infographic provides a visual representation of where the data flows within the connected vehicle ecosystem.
In 2017, FPF launched the first vehicle infographic, “Data and the Connected Car.” FPF’s continued work on connected vehicles has built upon this initial product, providing additional resources, up to and including the Vehicle Safety Systems Privacy Risks and Recommendations report from March 2024. That report specifically highlights the potential for privacy risks when new technology is incorporated into vehicles, whether by requirement or by choice. FPF has also submitted comments to the Department of Transportation regarding the privacy implications of future technology and of the future use of AI in transportation.
The updated infographic highlights three specific areas within the connected vehicle ecosystem:
1. Types of Data in the Vehicle include vehicle and safety data, occupant data, location data, account data, and biometric and body-related data. Artificial intelligence is likely to be present in various features and functions throughout the vehicle. Understanding the types and categories of data associated with connected vehicles is essential for regulating data and increasing privacy literacy among individual drivers and passengers. Some data, like operational information or data on engine health, is integral to vehicle functions, while other types of data can be user-generated and intended for personalization or driver assistance, including GPS navigation and smartphone integration.
2. Features Inside and Outside the Vehicle include technologies such as infotainment systems, event data recorders, and tire sensors. Additional novel technologies may be more commonly incorporated into vehicles in the future. Some of the vehicle technologies may be added after-market by individuals or are specific to a certain vehicle make and model, such as keyless entry, augmented reality displays, or external charging. Certain vehicle features may be governed by specific requirements and rules according to state and federal regulations. In addition, manufacturers are increasingly incorporating certain technologies specifically in response to emerging regulatory requirements. An increase in technology and data collection can increase the privacy risk associated with the vehicle.
3. Data Receivers or Data Handlers are entities that collect and control the flow of data from inside and outside the vehicle for various purposes, including performance and safety. Once the data is collected, its transfer and use can depend on a number of factors, including agreements with the manufacturer, third parties and service providers, emergency services, and external infrastructure such as traffic lights and automatic license plate readers. Manufacturers may receive vehicle and safety data, location data, account data, occupant data, and biometric or body-related data (depending on the technology incorporated into the vehicle). Third parties and service providers may also receive information about the vehicle and potentially about the user. Some third parties in the connected vehicle ecosystem include insurance companies, dealerships and service centers, and entities that provide in-vehicle services through the infotainment system. Notice to individuals should provide information about when data is required for the vehicle to function or for important safety or regulatory requirements.
Individuals should feel physically and digitally safe in their vehicles. In 2023, FPF conducted a survey wherein consumers indicated that transparency is important to trust and adoption of in-vehicle technologies intended to increase safety. This updated infographic can help provide people with transparency by providing a visual demonstration to foster an understanding of how technology is utilized in a vehicle and where personal data may be implicated. Additionally, this infographic can serve as a resource for policymakers who need to understand the ecosystem in order to regulate effectively. As vehicle privacy continues to be top of mind for all individuals, the updated FPF infographic serves to help improve understanding and provide the transparency that is needed for a trusted mobility ecosystem.
Infographic Explores Driver Data Collection and Use in Connected Cars
FPF’s “Data and the Connected Vehicle” Demystifies Connected Car Ecosystem as Policymakers Look to Regulate
SEPT. 16, 2024 — Vehicle technologies are evolving rapidly in every facet of the system, from safety features to entertainment and occupant convenience. Many of these new features are enabled by the collection of driver and occupant data – and data collected from their surroundings – for vehicles to function and communicate with service providers, with one another, and with sensors on and around the road. An updated infographic from the Future of Privacy Forum (FPF) provides drivers with an understanding of how their data is collected and used in connected vehicles and how data flows in the connected vehicle ecosystem.
Individuals and policymakers have increasingly called for additional transparency regarding vehicle data and what happens with it. FPF’s updated Data and the Connected Vehicle infographic provides an accessible visual of the critical data flows in today’s connected vehicles and how they collect and use data and AI to operate different systems.
“Most new vehicles have some, if not all, of the features outlined in Data and the Connected Vehicle, from wireless connectivity to cabin monitoring and microphones. To foster a trusted mobility ecosystem, it is vital that data is transferred respectfully and securely between a network of carmakers, vendors, and others to support individuals’ established safety, logistics, and information expectations,” said Adonne Washington, Policy Counsel of Data, Mobility, and Location at FPF and the project lead. “We created this project to demystify the behind-the-scenes of an everyday tool people rely on worldwide.”
A previous FPF survey found that many individuals value advanced vehicle safety technologies but worry about the privacy risks, accuracy, cost, and data transfers to third parties. FPF’s infographic aims to clear up misconceptions and clarify the privacy implications of connected cars and vehicle safety systems. This will be particularly pertinent as the National Highway Traffic Safety Administration (NHTSA) establishes new safety technology requirements for vehicle manufacturers and policymakers look to establish specific vehicle data policies.
“Data and the Connected Vehicle” updates a 2017 infographic to reflect the evolving landscape of smart and connected vehicles over the last few years.
“Ensuring privacy protections in vehicles is necessary, as is understanding how they work,” Washington continued. “As these systems continue to evolve and adapt to new driver accommodations, transparency will be key to their adoption and to building trust between manufacturers, regulators, and consumers.”
Download the new infographic here. In connection with its launch, FPF will host a public webinar on September 18 with privacy leaders from major automotive manufacturers, including Ford, Rivian, and Honda, to discuss how data collection and processing have enabled many new features in connected cars. Learn more and register for the event here.
FPF Unveils Report on Emerging Trends in U.S. State AI Regulation
Today, the Future of Privacy Forum (FPF) launched a new report—U.S. State AI Legislation: A Look at How U.S. State Policymakers Are Approaching Artificial Intelligence Regulation— analyzing recent proposed and enacted legislation in U.S. states. As artificial intelligence (AI) becomes increasingly embedded in daily life and critical sectors like healthcare and employment, state lawmakers have begun crafting regulatory strategies to promote its opportunities while addressing its heightened risks. This report by FPF delves into the trends of these legislative efforts, examines core questions and issues, and offers key considerations for policymakers as they navigate the complexities of AI policy.
The report primarily focuses on ‘Governance of AI in Consequential Decisions,’ a legislative framework most frequently adopted by lawmakers, which applies to a broad range of entities and industries, and offers the most comprehensive approach to mitigating specific AI risks across various proposals and laws. The report also discusses alternative approaches focused on particular technologies, such as generative artificial intelligence and frontier or foundation models.
In this Report, we highlight the following:
State lawmakers are primarily focusing on governing AI used in consequential decisions that significantly impact individuals’ livelihood and life opportunities.
A key goal is to mitigate the risk of algorithmic discrimination, either by prohibiting AI systems with identified discriminatory risks or by establishing a duty of reasonable care to protect consumers from such discrimination.
Most frameworks create role-specific obligations, including separate developer and deployer requirements for transparency, risk assessment, and AI governance programs.
Common consumer rights around AI include the rights to notice and explanation, to correction, and to appeal or opt out of automated decisions.
Alternatively, some lawmakers utilize a technology-specific approach to address novel risks posed by generative AI or frontier or foundation models.
This report is based on FPF’s analysis of key bills introduced in 2023 and 2024 (detailed in Supplementary Content), as well as our engagement with state policymakers. It also incorporates insights from civil society groups, businesses, and technical experts, whose diverse perspectives have been crucial in shaping a comprehensive examination of the nuances and challenges in advancing AI regulations.
The emerging trends highlighted in the report point to a collaborative movement toward an interoperable framework, where consistent definitions and principles are important for supporting business compliance, safeguarding individual rights, and ensuring regulatory clarity.
Call for Nominations: 15th Annual Privacy Papers for Policymakers Award
Future of Privacy Forum Award Elevates Privacy Research to Inform Policy Discussion
The award provides privacy and data protection scholars, researchers, and authors in the U.S. and internationally with the opportunity to inject their ideas into the current policy discussion. It elevates and honors important work analyzing current and emerging privacy issues, with the potential to inform real-world policy solutions as the U.S. Congress, federal regulators, and international data protection agencies grapple with privacy issues.
FPF also offers a student paper award to honor work authored by students in undergraduate, graduate, and professional programs. Student submissions must follow the same guidelines as the general PPPM award.
“The accelerating pace of AI has raised complex challenges for policymakers, and scholarship from the privacy and technology academic community is increasingly critical for shaping legislative and compliance solutions,” said Jules Polonetsky, CEO of FPF. “As lawmakers worldwide seek to address urgent data protection issues, FPF’s Privacy Papers for Policymakers publication serves as a critical resource highlighting leading research and expert perspectives.”
We encourage you to share this opportunity with your peers and colleagues. Learn more about the Privacy Papers for Policymakers program and view previous years’ highlights and winning papers on our website.
FPF will invite the authors of winning papers focused on U.S. policy to present their work at an annual event in Washington, D.C., in March 2025, with top policymakers and privacy leaders. Authors of winning papers focused on international policy will be invited to showcase their work to global policymakers and data protection authorities in a virtual event in March 2025. FPF will also publish a printed digest of the summaries of the winning papers for distribution to policymakers in the United States and abroad.
Learn more and submit finished papers by October 11, 2024. Please note that the deadline for student submissions is the same. You can also learn more about last year’s event here.
Five ways in which the DPDPA could shape the development of AI in India
India enacted the Digital Personal Data Protection Act, 2023 (DPDPA) on August 11, 2023, a comprehensive data protection law culminating from a landmark Supreme Court decision recognizing a constitutional right to privacy in India, and discussions on multiple drafts spanning over half a decade.1
The law comes at a time when, globally, there has been an exponential growth in artificial intelligence applications and use-cases, including consumer-facing generative AI systems. As a comprehensive data protection law, the DPDPA will significantly impact how organizations use and process personal data, which in turn affects the development and use of AI. Specifically, AI model developers and deployers will need to carefully consider the DPDPA’s regulatory scope concerning the processing of personal data, the limited grounds for processing, the rights of individuals in respect of their personal data, and the possible exemptions available to train and develop AI systems.
While the Central Government has yet to notify subordinate legislation to the DPDPA (the DPDP Rules), which will operationalize key provisions of the law, we can analyze the DPDPA for an early idea of how it could be applied to AI. While the new law may create challenges for AI training and development through its consent-centric regime, it also contains exemptions for publicly available data, exemptions for research, a limited territorial scope, and a risk-based approach to the classification of obligations—an overall approach that is likely to significantly shape the development of AI in India.
1. DPDPA’s consent-centric regime may pose challenges for AI training and development
The DPDPA recognizes consent and ‘certain legitimate uses’ as the two grounds for processing personal data. Section 7 of the DPDPA specifies scenarios where personal data can be processed without consent. These include situations where the data principal has voluntarily provided their personal data and has not objected to its use for a specific purpose, as well as cases involving natural disasters, medical emergencies, employment-related matters, and the provision of government services and benefits.
This means that the DPDPA creates a consent-centric regime for personal data processing. Notably, it does not recognize alternative legal bases to consent for processing personal data, such as contractual necessity and legitimate interests, which are provided under other leading data protection laws internationally, such as the General Data Protection Regulation (GDPR) in the EU and Brazil’s Lei Geral de Proteção de Dados (LGPD). Previous work by FPF has identified challenges – for both organizations and individuals – in relying on consent as the primary basis for processing, especially in ensuring that it is provided meaningfully. In the context of AI development, FPF’s report on generative AI governance frameworks in the APAC region highlights the challenges of relying on consent for web crawling and scraping (however, this may not be an issue under the DPDPA for publicly available data – see point 2 below). Specifically, without an established legal relationship with the individuals whose data is scraped, it is practically impossible to identify and contact them to obtain their consent.
Certain sector-specific AI applications and generative AI systems that require curated personal data to develop AI models will need to be trained on personal data that is not publicly available. In such a context, data fiduciaries (i.e., “data controllers” or entities that determine the purposes and means of processing personal data) will likely need to rely on consent as the primary ground for processing personal data. As per the DPDPA, data fiduciaries — in this case, AI developers or deployers — must ensure that consent is accompanied by a notice clearly outlining the personal data being sought, the purpose of processing, and the rights available to the data principal. Furthermore, for personal data collected before the enactment of the DPDPA, data fiduciaries are required to provide notice informing the “data principal” (i.e., data subject, or the person whose personal data are collected or otherwise processed).
2. Exemptions for publicly available data could facilitate training AI models on scraped data, but require caution
A significant provision under the DPDPA is the exclusion of publicly available data entirely from the scope of regulation. According to Section 3(c)(ii) of the DPDPA, the DPDPA does not apply to data that is made publicly available by the “data principal” or any other person legally obligated to make the data publicly available.
This blanket exemption goes further than similar provisions in other data protection laws, which, for instance, only exempt organizations from the obligation to obtain individuals’ consent to process their personal data if the data is publicly available. This is the case in Singapore, where Section 13 of the Personal Data Protection Act (PDPA), read with the Act’s First Schedule, exempts organizations from the requirement to obtain consent to process personal data if the data is publicly available. However, unlike the DPDPA, data protection obligations under the PDPA continue to apply even when processing publicly available data.
Similarly, Article 13 of China’s Personal Information Protection Law (PIPL), which, broadly, specifies the grounds for processing personal data, allows the processing of personal data without consent if the data has been disclosed by the individual concerned or has been lawfully disclosed. Such processing must be within reasonable scope and must balance the rights and interests of the individual and the larger public interest.
In Canada, the relevant exemption under the Personal Information Protection and Electronic Documents Act (PIPEDA) only applies to the processing of publicly available information in the circumstances mentioned in the Regulations Specifying Publicly Available Information, SOR/2001-7 (13 December, 2000). The Canadian data protection regulator provides guidance on the interpretation of what could be considered as publicly available.
Of note, the EU’s GDPR does not include any exemptions or even tailored rules applying to publicly available personal data. This is because the whole regulation applies equally to all personal data, including the provisions related to lawful grounds for processing. For instance, with regard to giving notice to data subjects, the GDPR even has a dedicated article that requires notice to be given when personal data was not collected directly from data subjects (Article 14). However, this obligation has an exception where “the provision of such information proves impossible or would involve a disproportionate effort, in particular for processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes”. There is currently an ongoing debate among European regulators on whether processing publicly available personal data, particularly through scraping, can be done lawfully without the consent of individuals under the GDPR, with no clear answer yet.2
Globally, the scraping of webpages has come under increased regulatory scrutiny. In August 2023, members of the Global Privacy Assembly’s International Enforcement Cooperation Working Group issued a joint statement urging social media companies and other websites to guard against unlawful scraping of personal information from web pages. In May 2024, the European Data Protection Board’s ChatGPT Taskforce noted in its report that information collected and extracted automatically from webpages might contain personal data, including sensitive categories of personal data, which could “carry peculiar risks for the fundamental rights and freedoms” of individuals.
Processing of publicly available personal data would not be subject to obligations under the DPDPA to the extent that any personal data contained in the datasets was made publicly available by the data principal or by someone legally required to do so – this may include, for example, personal data from social media platforms and company directories. However, organizations will still need to incorporate appropriate safeguards to ensure that only permissible personal data is scraped and that the scraped data does not violate any other applicable laws. At the same time, questions may arise with regard to the applicability of the DPDPA to publicly available personal data that was collected for an initial processing operation, such as training an AI model, but which is no longer publicly available after being collected.
3. Exemptions for research purposes with clear technical and ethical standards could promote AI research and development
Section 17(2)(b) of the DPDPA also exempts processing of personal data for “research, archiving or statistical purposes” from obligations under the DPDPA. However, this exemption only applies if such processing complies with standards prescribed by the Central Government and is not done to take “any decision specific to a [d]ata [p]rincipal”. To date, the Central Government has not released any standards relating to this provision.
By contrast, data protection laws in most jurisdictions do not specifically provide an exemption for processing personal data for research purposes. Instead, they recognize research as a secondary use that does not require a lawful basis distinct from the one originally relied on, or they permit non-consensual processing for research, subject to certain conditions.
For instance, in the EU, under the GDPR, secondary use of personal data for archiving, statistical, or scientific research purposes is permissible, provided that ‘appropriate safeguards’ are in place to protect the rights of the data subject. These safeguards include technical and organizational measures aimed at ensuring data minimization. Furthermore, the GDPR allows the processing of sensitive categories of personal data when necessary for scientific or historical research purposes.
In Japan, the Act on the Protection of Personal Information (APPI) exempts consent requirements, in cases of secondary collection and use of personal data, if the data is obtained from an academic research institution and processed jointly with that institution. However, such processing must not be solely for commercial purposes and must not infringe upon the individual’s rights and interests.
In Singapore, the PDPA provides a limited additional basis for the use, collection, and disclosure of personal data for research purposes, if the organization can satisfy the following conditions: (a) the research purpose requires personally identifiable information; (b) there is a clear public benefit to the research; (c) the research results will not be used to make decisions affecting individuals; and (d) the published results do not identify individuals.
It is unclear at this stage if the research exemption under the DPDPA will extend to only academic institutions or also extend to private entities that engage in research. While such an exemption, with clearly outlined standards, could help create quality data sets for model development, it is crucial to have clearly defined technical and ethical standards that can prevent privacy harms.
4. Limited nature of DPDPA’s territorial scope may allow offshore providers of AI systems to engage in unregulated processing of personal data of data principals in India
Like many other global data protection frameworks, the DPDPA has extraterritorial applicability. Section 3(b) of the DPDPA indicates that the DPDPA applies to entities that process personal data outside India, if such processing is connected to any activity which is related to the offering of “goods or services” to data principals in India.
This provision is narrower in scope than similar provisions under other global data protection laws. For example, the GDPR, unlike the DPDPA, also applies extraterritorially to processing that involves “the monitoring of behavio(u)r” of data subjects within the European Union. In fact, data protection authorities in Europe have fined foreign entities for unlawfully processing the personal data of EU residents, even when those entities have no presence in the region. Of note, under the EU’s AI Act, AI systems used in high-risk use cases3 “should be considered to pose significant risks of harm to the health, safety or fundamental rights if the AI system implies profiling” as defined by the GDPR (Recital 53), thus linking “profiling” as a component of an AI system to heightened risks to the rights of individuals. Interestingly, the Personal Data Protection Bill, 2019, which was introduced in the Indian Parliament and withdrawn in 2022, and the Joint Parliamentary Committee’s version of the data protection bill also extended extraterritorial applicability to any processing that involved the “profiling of data principals within the territory of India”.
This narrower scope permits offshore providers of AI systems, which do not provide goods and services to data principals in India, to profile and monitor the behavior of data principals in India without being subject to any obligations following from the DPDPA. Additionally, such companies may engage in unregulated scraping of publicly available data to train their AI systems, beyond the exception explored above. As highlighted in point 2, publicly available personal data that has not been made available by the data principal or by any other person under a legal obligation still falls under the DPDPA’s scope of regulation. This could include personal data shared by others on blog pages, social media websites, or in public directories, among others. Compliance with the DPDPA obligations in these scenarios does not extend to offshore organizations, as long as they do not engage in activities related to offering goods or services in India.
For the same types of data, all other data fiduciaries must ensure that the data is processed based on permissible grounds and is protected by appropriate security safeguards. Additionally, for personal data collected through consent, data fiduciaries must ensure that data principals are afforded the rights to access, correct, or erase their personal data held by the fiduciary.
5. Classification of significant data fiduciaries with objective criteria would allow a balanced and risk-based approach to data protection obligations relevant to AI systems
The DPDPA adopts a risk-based approach to imposing obligations by introducing a category of data fiduciaries known as ‘Significant Data Fiduciaries’ (SDFs). The DPDPA empowers the Central Government to designate any data fiduciary or class of data fiduciaries as an SDF based on the following factors:
The volume and sensitivity of personal data processed;
The risk posed to the rights of data principals;
The potential impact on the sovereignty and integrity of India;
Risk to electoral democracy;
Security of the state; and
Public order.
In addition to complying with the obligations for data fiduciaries, SDFs are required to:
appoint a resident Data Protection Officer who will serve as the primary point of contact for grievance resolution under the mandatory grievance redressal mechanism;
designate an independent data auditor to conduct regular audits, ensure compliance with data protection obligations, and carry out periodic Data Protection Impact Assessments (DPIA).
The DPIA obligation is particularly relevant to identifying and mitigating risks to privacy and other rights that may be impacted by processing of personal data in the context of training or deploying an AI system.
The Central Government also has the power to impose additional obligations on SDFs. On the other hand, the Central Government is also empowered to exempt certain data fiduciaries or classes of data fiduciaries, “including startups”, from obligations relating to notice, data retention limitation, and accuracy.
It is important to note that the DPDPA does not specify objective criteria, such as the categories of personal data that may be considered sensitive, or the volume of data or users required, for the classification of SDFs or the easing of certain obligations for data fiduciaries. In the absence of these specific quantitative thresholds, the classification of AI-driven companies could be influenced by the Central Government’s perception of the potential threats posed by specific AI applications.
Conclusion
With the AI market in India growing at 25-35% annually and projected to reach a market size of around $17 billion by 2027, the Indian government has recognized this opportunity by allocating over $1.2 billion for the IndiaAI Mission, aimed at developing domestic capabilities to boost the growth of AI in the country. As AI continues to evolve and integrate into various sectors, the DPDPA provides a crucial framework that will influence how organizations develop and deploy AI technologies in India. The law’s exemptions for publicly available data, its over-reliance on consent, and a graded approach to obligations for data fiduciaries present both opportunities and challenges.
The provisions of the DPDPA will only take effect once the government issues a notification under Section 1(2) of the DPDPA. The forthcoming DPDP Rules are expected to clarify and operationalize key aspects of the Act. These include the form and manner of providing notices, breach notification procedures, how data principals can exercise their rights under the DPDPA, and the provisions on procedure and operations of the Data Protection Board. The effectiveness of the law in balancing privacy protections, preventing harms, on one hand, and harnessing the benefits that AI could bring for people and society, on the other hand, will become clearer once these rules are in place.
Edited by: Gabriela Zanfir-Fortuna, Josh Lee Kok Thong, and Dominic Paulger
You can refer to FPF’s previous blogs (here and here) for a brief history and overview of the DPDPA.
Does the GDPR Need Fixing? The European Commission Weighs In
The European Commission published its second Report on the General Data Protection Regulation (GDPR) on July 25, 2024, assessing the GDPR’s impact and the effectiveness of its application since the Commission’s first Report, published in June 2020. The second Report acknowledges the relative success of the GDPR in protecting individuals and supporting businesses, while also highlighting areas for improvement, calling for further progress in supporting stakeholders’ compliance efforts, in providing clearer and more actionable guidance from data protection authorities (DPAs), and in achieving more consistent interpretation and enforcement of the GDPR across EU Member States.
This blog surfaces key takeaways from the Commission’s second Report on the GDPR, with an overview and analysis of the findings from various stakeholders, including DPAs. The Report draws conclusions following the past years of GDPR enforcement and applicability, exploring enforcement and the use of cooperation and consistency mechanisms; implementation of the GDPR by Member States and an overview of the exercise of the data subject rights; the GDPR as a cornerstone of the EU’s new legislative rulebook; and international transfers and global cooperation.
1. Enforcement and the use of cooperation and consistency mechanisms are on a growth trend, bringing total fines of EUR 4.2 billion and increased use of corrective measures
In 2020, the Commission’s first Report highlighted the need for a more efficient and harmonized handling of cross-border cases across the EU, resulting in the 2023 Commission proposal for a Regulation on additional procedural rules currently being negotiated by EU legislators.
In its second Report, the Commission assessed recent enforcement activity under the GDPR, highlighting a trend of increased cooperation between DPAs, increased use of the GDPR consistency mechanism and the growing intervention of the European Data Protection Board (EDPB) via its Opinions, with the following highlights:
Almost 2,400 case entries were registered in the EDPB’s information exchange system as of 3 November 2023;
Lead DPAs issued approximately 1,500 draft decisions, with over 990 resulting in final decisions finding GDPR infringements (as of 3 November 2023);
DPAs from 7 Member States participated in 5 joint operations; and
DPAs from 18 Member States raised 289 relevant and reasoned objections, 101 of which were raised by German authorities, with a success rate in reaching consensus varying from 15% (German authorities) to 100% (Polish DPA).
The cases submitted to dispute resolution addressed the legal bases for processing data for behavioral advertising on social media and processing children’s data online.
Regarding the consistency mechanism, the report notes that:
The EDPB has adopted 190 consistency opinions;
9 binding decisions were adopted in dispute resolution, with all of them instructing the lead DPA to amend its draft decision and some of them resulting in significant fines;
5 DPAs adopted provisional measures under the urgency procedure (Germany, Finland, Italy, Norway and Spain); and
2 DPAs requested an urgent binding decision by the EDPB under Article 66(2) GDPR, and the EDPB ordered urgent final measures in one case.
The Commission pointed to more robust enforcement activity by DPAs in recent years. DPAs use corrective measures and adopt infringement decisions in complaint-based and own-initiative cases. The Report stated that DPAs have imposed “substantial fines in landmark cases against ‘big tech’”. For instance, DPAs have imposed over 6,680 fines amounting to approximately EUR 4.2 billion, with Ireland accounting for the highest total fines (EUR 2.8 billion), followed by Luxembourg (EUR 746 million) and France (EUR 131 million). Liechtenstein, Estonia, and Lithuania were reported to have imposed the lowest total fines, at EUR 9,600, EUR 201,000, and EUR 435,000, respectively. The highest numbers of fines were imposed in Germany (2,106) and Spain (1,596); the fewest were imposed in Liechtenstein (3), Iceland (15), and Finland (20). Most fines were imposed for (i) infringement of the principles of lawfulness and security of processing, (ii) infringement of the provisions related to the processing of special categories of personal data, and (iii) failure to comply with individuals’ rights (Chapter III of the GDPR).
The Report showed that DPAs effectively used “amicable settlement” procedures, with over 20,000 complaints resolved, even though such procedures are not available in all Member States. This procedure was commonly used in Austria, Hungary, Luxembourg, and Ireland.
Furthermore, DPAs launched over 20,000 own-initiative investigations and collectively received over 100,000 complaints yearly. In 2022, nine DPAs received over 2,000 complaints each. Germany (32,300), Italy (30,880), Spain (15,128), the Netherlands (13,133), and France (12,193) registered the highest numbers of complaints, while Liechtenstein (40), Iceland (140), and Croatia (271) registered the lowest. The median time to handle complaints from receipt to closure ranges from 1 to 12 months.
The Report notes that German DPAs launched the highest number of own-initiative investigations (7,647), followed by Hungary with 3,332, Austria with 1,681, and France with 1,571 investigations.
Besides fines, DPAs used corrective measures such as warnings, reprimands, and orders to comply with the GDPR. In 2022, German DPAs adopted the highest number of decisions imposing corrective measures (3,261), followed by Spain (774), Estonia (332), and Lithuania (308). The lowest numbers of corrective measures were imposed in Liechtenstein (8), Czechia (8), Iceland (10), the Netherlands (17), and Luxembourg (22). Controllers and processors frequently challenge decisions in national courts, most commonly on procedural grounds. For instance, in Romania, all 26 decisions finding an infringement were challenged before the national court, while in the Netherlands, the rate of challenge was reported to be 23%.
2. Implementation of the GDPR by Member States continues to be fragmented
Similar to the 2020 Report, stakeholders still reported fragmentation in the national application of the GDPR, from national legislation to diverging interpretations of the GDPR by DPAs. The concerns relate, in particular, to:
The minimum age for a child’s consent in relation to the offer of information society services to the child;
Introduction by Member States of further conditions concerning the processing of genetic data, biometric data or data concerning health; and
Processing of personal data relating to criminal convictions and offenses.
However, the Report mentions that Member States consider that a limited degree of fragmentation may be acceptable. The specification clauses provided by the GDPR remain beneficial, particularly for processing by public authorities (the Council position states that “the margins left for national legislation to define specific framework for certain type of processing activities, for example when it comes to article 85 and 86 of the GDPR regarding the freedom of expression and information and the right of public access to official documents, remain beneficial and relevant notably for public authorities given the specificity of their processing activities”).
Notably, the Report points out that the interpretation of the GDPR by national DPAs remains fragmented as DPAs continue to adopt diverging interpretations of key data protection concepts, creating legal uncertainty and disrupting the free movement of personal data. Some of the specific issues raised by stakeholders include different views on the appropriate legal basis for processing personal data, diverging opinions on whether an entity is a controller or processor, and, in some cases, DPAs not following the EDPB guidelines or publishing conflicting national guidelines. Some stakeholders also consider that certain DPAs and the EDPB adopt interpretations that deviate from the risk-based approach of the GDPR, mentioning areas such as the interpretation of anonymization, the legal bases of legitimate interest and consent, and the exceptions to the prohibition of automated individual decision-making.
The Commission highlights that it monitors the implementation of the GDPR on an ongoing basis, having launched infringement procedures against Member States on issues concerning the independence of DPAs (e.g., Belgium) or the right to an effective judicial remedy where the DPA does not handle a complaint (e.g., Finland and Sweden). The Commission also regularly requests confidential updates from DPAs on significant cross-border cases, particularly those involving large tech companies.
3. Over two-thirds of Europeans have heard of the GDPR, and they are increasingly exercising their Data Subject Rights
A noteworthy mention is that individuals are increasingly familiar with and actively exercise their rights under the GDPR: 72% have heard of the GDPR, with 40% knowing what it is. Awareness is highest in Sweden (92%) and lowest in Bulgaria (59%). Additionally, 68% are aware of a DPA responsible for data protection, with 24% knowing which authority it is. Awareness of DPAs is highest in the Netherlands (82%) and lowest in Austria (56%) and Spain (58%) (2024 Eurobarometer survey as referenced by the Commission’s report). While these statistics show an increased awareness of the existence of data protection rights, understanding of the GDPR still needs to be improved, as evidenced by many trivial or unfounded complaints received by DPAs.
Nonetheless, several user-friendly digital tools have been developed to make it easier for data subjects to exercise their rights. Additionally, by adopting the Data Governance Act the Commission hopes to increase the number of such tools. Industry stakeholders have stated that the right to erasure is increasingly used, while the right to rectification and the right to object are rarely used.
Right of access: The most frequently invoked is the right to access (Art. 15 GDPR). Controllers report that they are challenged with “unfounded or excessive requests”, managing high volumes of requests, and dealing with requests unrelated to data protection. Civil society organizations note that responses to access requests are often delayed or incomplete, while the data received is not always in a readable format. Public authorities claim to have difficulties with resolving the interaction between the right of access and rules on public access to documents.
Right to portability: Building on the right to data portability, the Commission has adopted initiatives that facilitate easier switching between services, supporting competition, innovation, and user choice. The Report makes reference to the role of the Data Act in enhancing data portability for users of smart devices, requiring products or related services to support this technically, and to the Digital Markets Act, which mandates effective data portability for users of core platform services, particularly those provided by “gatekeepers”. Other initiatives, such as the Platform Work Directive, the European Health Data Space Regulation, and the Framework for Financial Data Access Regulation, aim to bolster portability rights in specific sectors. Interestingly, the Report does not include any data on portability-related requests under the GDPR or complaints related to portability.
Right to lodge a complaint: The large number of complaints received shows that there is broad awareness of the right to lodge complaints with DPAs. However, civil society organizations continue to point out inconsistencies in how complaints are handled across Member States. The Commission maintains that its legislative proposal on procedural rules should address these issues. Regarding collective redress, although few Member States have allowed non-profit bodies to take independent action under GDPR Article 80(2), the Representative Actions Directive, effective from June 2023, is expected to harmonize this process by facilitating collective actions for GDPR breaches.
Protection of children’s data: The EU and national authorities have increasingly implemented measures to safeguard children online, notably with the introduction of the Digital Services Act and its provisions to enhance children’s privacy and safety on online platforms. This policy priority is equally reflected in the data protection field, with DPAs working together to promote child protection in advertising and recently fining social media companies for GDPR violations when processing children’s data. Other key developments include the upcoming EDPB guidelines on children’s data processing, and the creation of a task force on age verification to support the development of an EU-wide approach to age verification, under the auspices of the Digital Services Act Board. Age verification will be included in the European Digital Identity Wallet, which should be available to all EU citizens and residents in 2026.
4. The position of DPOs and the availability of soft law tools need improvement
The Commission’s Report focuses on the GDPR’s role in establishing a level playing field, noting how companies have embraced an internal data protection culture and recognize it as a key competitive factor, thanks to the GDPR’s flexible compliance framework built on soft law tools such as Codes of Conduct, certification mechanisms, and standard contractual clauses (SCCs). However, several shortcomings are identified, from the perspective of both stakeholders and regulators. Companies note that the use of soft law tools needs improvement, arguing that the development of Codes of Conduct has been limited due to bureaucracy and a lack of engagement from DPAs. In particular, SMEs report that, despite the benefits of tailored support by DPAs, they still perceive compliance as complex and fear enforcement, as inconsistent approaches remain across Member States. The Report calls on DPAs to engage more proactively and provide practical tools and guidance.
EU data protection officers (DPOs) are also addressed by the Commission’s Report: despite being well-regarded as independent experts, several challenges are mentioned, such as difficulties in their appointment, lack of resources, additional non-data protection tasks, and insufficient seniority, with the EDPB calling for enhanced awareness-raising and support from DPAs to ensure that DPOs can effectively perform their duties under the GDPR.
5. The GDPR is described as a cornerstone for the EU’s new legislative rulebook in the digital sphere
Since the 2020 Report, several EU legislative initiatives have complemented or specified GDPR rules to address emerging areas, some of them being proposed specifically to enhance data sharing. The Commission highlights several files, some completed, some still under legislative action: the Digital Services Act, the Digital Markets Act, the AI Act, the Directive on Platform Work, the Political Advertising Regulation, the Interoperable Europe Act, the anti-money laundering package, the Data Governance Act, the Data Act, and the European Health Data Space. Notably, the Commission includes the proposed e-Privacy regulation among the digital policy initiatives building on the GDPR. The report highlights that all new legislation must align with the GDPR and the Court of Justice case law interpreting it.
With multiple digital rules on the horizon, cooperation across various regulatory areas, such as data protection, competition law, consumer law, and cybersecurity, is needed. In its Report, the Commission notes that close cooperation is crucial when addressing issues such as the compatibility of “pay or OK” models with EU law.
New digital regulations often establish specialized structures, such as the Digital Markets Act high-level group and the European Data Innovation Board, to coordinate enforcement. DPAs actively engage with other regulatory bodies through groups and task forces to ensure coherent and complementary actions. However, there is a need for more structured and efficient cooperation, especially for cross-border issues affecting many individuals, while ensuring that each authority remains responsible for compliance within their jurisdiction. The Report highlights that Member States should enhance national-level collaboration to support this.
6. Global ambitions continue with new adequacy decisions, trade agreements featuring data protection provisions, and enforcement cooperation agreements with third countries
The Commission assesses that, since 2020, the concept of “international transfers” under the GDPR has been updated to reflect the CJEU’s Schrems II ruling. That ruling further clarified the level of protection that different transfer instruments must provide to ensure that the GDPR is not undermined, as well as how that level of protection is to be assessed, with data exporters having to consider both the safeguards set out in the transfer instrument and the relevant aspects of the legal system where the data importer is located. The Report also notes that the Schrems II ruling has been reflected in the guidance of the EDPB, which updated its “adequacy referential”.
The Commission, therefore, provides a comprehensive update on the next steps in its global cooperation efforts since the Schrems II ruling. Following the invalidation of the adequacy decision for the EU-US Privacy Shield, the EU and the US developed the EU-US Data Privacy Framework, introduced through an Executive Order on Enhancing Safeguards for United States Signals Intelligence Activities; the Commission followed suit, adopting an adequacy decision, with a first review set to take place in 2024.
New adequacy decisions in conformity with the latest interpretation have also been adopted, while others are expected soon: The Commission has adopted adequacy decisions for South Korea and the UK (with a “sunset clause” expiring in 2025). Adequacy talks are ongoing with Brazil, Kenya, and international organizations such as the European Patent Organisation. The Commission is also engaging with various countries globally to expand the network of adequacy decisions. Periodic reviews of existing decisions are also taking place, the most recent being Japan in 2024. The Commission also highlights the role played by these decisions as a strategic tool for improving EU relations and promoting regulatory convergence with third countries.
The Report calls for streamlining of the BCR approval process
The Report also praises the development of additional instruments beyond adequacy decisions, such as new SCCs, which introduce updated safeguards aligning with GDPR requirements, a modular approach offering a single entry-point covering various transfer scenarios, increased flexibility for the use by multiple parties, and a practical toolbox to comply with the Schrems II decision. The SCCs were welcomed by stakeholders, with feedback indicating that the SCCs remain the most used tool for transfers by EU data exporters.
The stakeholder feedback points out that model clauses are increasingly central to global data flows, with several jurisdictions having endorsed the EU SCCs as a transfer mechanism under their own data protection laws, with limited formal adaptations to their domestic legal order (for instance, the UK and Switzerland). Other countries have also adopted model clauses that share important common features with the EU SCCs (for example, New Zealand and Argentina). Moreover, the Report points to the creation of model clauses by other international and regional organizations or networks, such as the Council of Europe Consultative Committee of Convention 108, the Ibero-American Data Protection Network, and the Association of Southeast Asian Nations (ASEAN), noting that this opens up new opportunities to facilitate data flows between different regions based on model clauses, and citing the EU-ASEAN Guide on the EU SCCs and ASEAN model clauses as a concrete example.
In addition to SCCs, binding corporate rules (BCRs) remain prominent for data transfers between members of corporate groups or among enterprises engaged in a joint economic activity: since the adoption of the GDPR, the EDPB adopted 80 positive opinions on national decisions approving BCRs. However, the report calls on DPAs to streamline the BCR approval process, which stakeholders describe as long, complex, and detrimental to their broader adoption.
Privacy and Data Protection will Continue to be Featured in Trade Agreements
Highlighting the successful inclusion of data protection safeguards in recent EU agreements with, for example, the UK and Canada, the Report argues that integrating data protection safeguards within international agreements to ensure effective and secure data flows will continue to feature in further agreements, pointing to the Second Additional Protocol to the Cybercrime Convention and the EU-U.S. bilateral negotiations on an agreement on cross-border access to electronic evidence for criminal matters.
The Report also highlights the Commission’s position as a proponent of strong provisions to protect privacy and boost digital trade at the World Trade Organization in the ongoing negotiations on the Joint Statement Initiative on electronic commerce, noting that, since the GDPR came into force, privacy and data flow provisions have been consistently included in EU free trade agreements, notably the EU-UK Trade and Cooperation Agreement and the agreements with Chile, Japan, and New Zealand. Discussions are ongoing with Singapore and South Korea.
The Commission plans to negotiate enforcement cooperation agreements with third countries, such as the G7 members
The Report also details that the Commission has maintained an active role in global privacy discussions at the bilateral level (with national governments, regulators, international organizations, and especially EU candidate countries) and the multilateral level (contributing to the Consultative Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108), engaging in discussions at the G20 and G7, and working with regional organizations such as ASEAN and the African Union). In the coming years, it remains to be seen how the Commission takes such engagement further, particularly with regard to negotiating enforcement cooperation agreements.
7. Concluding Reflections: next steps for the GDPR?
The Report concludes that to achieve the twin goals of the GDPR – strong protection for individuals while ensuring the free flow of personal data within the EU and safe data flows outside the EU – there needs to be a focus on:
Robust enforcement: accelerate the adoption of GDPR procedural rules;
Support: proactive support from DPAs to assist SMEs and stakeholders in GDPR compliance;
Consistency: ensure uniform GDPR interpretation and application across the EU;
Effective cooperation: enhance collaboration among regulators;
Global action: advance the Commission’s international strategy on data protection.
The Report notes that the EDPB and DPAs are invited to make full use of cooperation tools under the GDPR so that dispute resolution is used only as a last resort, and that Member States are called on to ensure that DPAs maintain full independence and receive adequate resources, including technical expertise, to address emerging technologies and new responsibilities in the context of a growing body of digital legislation. Within this ecosystem, the Commission will address the need for effective cross-regulatory cooperation to ensure consistent application of EU digital rules while respecting DPAs’ roles in the supervision of personal data processing.
Notably, after taking stock of its successes and shortcomings in this second Report, the Commission is not calling for the GDPR to be reopened and amended.
Editors: Dr. Gabriela Zanfir-Fortuna, Bianca-Ioana Marcu
Privacy Roundup from Summer Developer Conference Season 2024
Ahh, summer. A time for hot dogs, swimming pools, and software developer conferences. For third-party application developers to deliver new apps with the best features for the lucrative fall quarter, they need access to the platforms’ APIs and tools by the preceding summer. As a result, early summer has become known as announcement season for the major big tech platforms.
Anyone even remotely adjacent to the tech industry can probably tell you the main takeaway emphasized by Google, Microsoft, and Apple in their respective developer conferences using just two words: Artificial Intelligence. If the last couple of years have been building hype for AI, this summer’s developer conference season may be seen as a turning point from research to reality, as all three companies emphasized significant investments to bring AI to practically every platform. Google, Microsoft, and Apple all announced major new developments and initiatives around AI that impact privacy.
Taken holistically, three main takeaways emerge for privacy professionals from the announcements made this summer, and we’re going to cover each of them. First, every platform will have some AI integrations that require privacy risk analysis. Second, privacy risks from AI are more likely to be realized because AI will be an integrated system-level feature rather than an application-level or user-level add-on. Third, major privacy-relevant announcements were not limited to AI, but include changes to password management and advertising on Apple systems.
AI is front and center for all platforms, with a significant focus on hardware advancements that can limit privacy risks
Google, Microsoft, and Apple each advanced a vision of multimodal AI as a central focus for developers and users of their platforms, including through deep integrations of AI into existing software and hardware. As the platforms prioritize AI, these updates will also shape the privacy protections that users expect in years to come. For example, smaller AI models that can be executed locally and hardware advancements that enable on-device processing can limit privacy risks by eliminating the need to share data with cloud providers or third parties to take advantage of AI capabilities.
Google’s vision presented at I/O emphasized their development of LLMs at a variety of sizes, from Gemini Ultra (a large but slow model capable of handling inputs with multiple millions of tokens) to Gemini Flash (a lightweight, fast, and efficient model capable of handling only more limited inputs). Google also announced a series of additional AI models, including a general-purpose model that sits between these two (Gemini Pro); an embeddable model that will be built directly into Google’s Chrome browser and could allow web developers to perform queries without requiring a network connection (Gemini Nano); a text-to-video generation model (Veo); and a new iteration of their text-to-image generation model (Imagen 3). Google engineers have also announced several open models (Gemma 2B and 7B; CodeGemma; and PaliGemma). The privacy tradeoff of model size is this: the smaller the model, the easier it is to operate locally. Large models are more efficiently operated in a cloud environment as a service, requiring data to be transferred to a third party.
Google also emphasized the capabilities of each of their models. With the exception of Veo and Imagen 3, Google’s models are natively multimodal, meaning each tool will have the capacity to interact in text, images, audio, or other input modalities. This shift is part of a larger trend of integrating AI into a variety of form factors, and it brings new challenges related to transparency and accuracy. Google also emphasized context size for each model. Context size refers to the amount of data that can be provided to an AI model, with a larger context size generally leading to more coherent and responsive results. Sissie Hsiao said during the Google I/O keynote that this large context window will allow people to “tackle complex problems that were previously unimaginable.” The more capable the model, the more data privacy concerns are implicated, because a more capable model can treat a wider range of data as valid inputs.
Each company acknowledged these tradeoffs and outlined their implications for developers using its tools. Any cloud-based approach to AI highlights the fundamental privacy tension at the core of AI-based computing: the more data the AI has access to, the better the results it can provide. On-device processing limits the personal data sent to third parties to produce AI-based results; however, an on-device approach is constrained by the model size and computational capabilities of the hardware, so it can handle only less complex queries, albeit with fewer privacy and security implications. Based on the announcements and developer tool lineups, all three companies understand and are attempting to account for these tradeoffs.
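To make the tradeoff concrete, the sketch below shows one way an application might route requests between a small on-device model and a larger cloud-hosted model. This is a hypothetical illustration only: the function names, the token threshold, and the stand-in model back-ends are our own assumptions and do not reflect any vendor’s actual API.

```python
# Hypothetical sketch of routing AI requests between on-device and cloud models.
# All names, thresholds, and model back-ends are illustrative; no vendor API is implied.

ON_DEVICE_TOKEN_LIMIT = 4_096  # assumed capacity of a small local model


def estimate_tokens(prompt: str) -> int:
    """Rough token estimate (roughly four characters per token)."""
    return max(1, len(prompt) // 4)


def run_local_model(prompt: str) -> str:
    """Stand-in for a small on-device model: the prompt never leaves the device."""
    return f"[local answer to a {estimate_tokens(prompt)}-token prompt]"


def call_cloud_model(prompt: str) -> str:
    """Stand-in for a large hosted model: the prompt is transferred off-device."""
    return f"[cloud answer to a {estimate_tokens(prompt)}-token prompt]"


def route_request(prompt: str) -> str:
    """Prefer on-device processing; fall back to the cloud only when the prompt exceeds local capacity."""
    if estimate_tokens(prompt) <= ON_DEVICE_TOKEN_LIMIT:
        return run_local_model(prompt)
    return call_cloud_model(prompt)


if __name__ == "__main__":
    print(route_request("Summarize this short note."))  # small prompt, handled locally
    print(route_request("word " * 30_000))              # large prompt, sent to the cloud
```

In practice, platforms layer further protections on top of routing logic like this, such as confining cloud processing to hardened infrastructure or prompting the user before data leaves the device.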
More AI tools being integrated as system-level features will bring novel privacy challenges for platforms
Google, Microsoft, and Apple have laid out a vision of AI that is deeply integrated into many products and features, including many system-level integrations. System-level integration, whether done with embedded AI models, hardware-supported AI, or operating system integrations, may bring benefits to both developers and users. Users may benefit from system-level summarization or re-writing tools, for example. Developers unfamiliar with AI but using system-provided software developer kits may be able to incorporate these integrations with minimal configuration and coding. At the same time, system-level AI integrations add challenges for platforms seeking to navigate how to communicate and record consent preferences for the flow of information needed to power such features, particularly in the context of workplace-assigned or government-assigned devices.
Microsoft’s hardware integrations and Windows integrations were central to their pitch to developers on their support for AI. Let’s start with hardware integration, because more AI-capable local hardware means less data has to leave the device for third-party AI services. Microsoft is using the Snapdragon X Elite and Snapdragon X Plus line of chips in their newly announced Copilot+ PCs and Surface Pro devices. For comparison, Apple’s M4 Neural Engine is capable of 38 trillion operations per second, whereas the neural processing unit in the Snapdragon X Elite is capable of 45 trillion operations per second. Microsoft’s support for and inclusion of this line of chips in their upcoming products signals both their seriousness about hardware integration for AI tasks and their recognition that on-device processing is a win for privacy and security.
The other clear focus of Microsoft’s announcements was Windows integration. Building AI into the operating system makes it easier for developers to take advantage of the technology and easier for users to have consistent expectations about how their data will be used. Microsoft CEO Satya Nadella compared the announcement of the Windows Copilot Runtime, a system-level set of libraries that software developers can use to integrate AI into their native Windows applications, to the Win32 libraries that have been core to Windows application development since the mid-1990s. Better integration of AI leads to more use of AI, raising the stakes of AI-focused privacy risk analysis.
Similarly, Apple’s on-device processing can be seen in a handful of tools, including Image Playground, a tool for generating images in a restricted set of styles that is available system-wide and accessible anywhere an image could serve as a valid input, including Messages. Apple also introduced on-device, system-wide text tools for language, including proofreading, rewriting, and summarizing text. On-device photo and video editing and curation tools round out their consumer-facing take on AI. Note that these on-device AI examples are less open-ended and more task- or use-case-oriented, making the privacy tradeoffs clearer.
Apple’s changes to Siri are perhaps the clearest example of Apple’s focus on system integration. Apple announced major changes to Siri, first released in 2011, to support a more integrated user experience, alongside two clear privacy protections for cloud-based AI. The first is Private Cloud Compute, which isolates computation to protect data during cloud-based processing. The details of this architecture are complex, but the goal is simple: to provide the most trustworthy “Apple Intelligence” experience possible. The second relates to Apple’s announced partnership with OpenAI to handle queries that cannot be performed within the Apple Intelligence ecosystem. Siri will prompt users before sending any data or queries to OpenAI, making users aware of any OpenAI processing before it happens.
Key data privacy principles, including data minimization, purpose limitation, and respect for data context (i.e., recognition of data as sensitive or non-sensitive) can sometimes be in direct tension with always-accessible AI services, particularly those that would send input information to third-party servers as context for an AI prompt. In some cases, AI features being announced will rely on strictly on-device processing or processing within a trusted execution environment. In others, however, the data may be sent to the platform to process queries or requests, but that transfer may not always be obvious with respect to basic system-level integrations, even if the transfer may contain confidential or personal information that would implicate data protection laws.
As AI services are more widely used, the amount and scope of data provided to them in the form of user queries from the products and systems that support them will grow, raising overall organizational risk while simultaneously making on-device processing a more valuable risk mitigation tool. Privacy professionals will have to consider carefully whether and how to enable these services for their organizations, especially with respect to workplace and government-assigned devices, while individuals will have to be cognizant of what data is required for their interactions with AI interfaces, particularly when working on a business-owned computer.
Major privacy announcements aren’t limited to AI
Among so much AI-related news, there were two significant announcements from Apple that are unrelated to AI but directly impact privacy: Apple Passwords and AdAttributionKit.
Apple introduced a new Passwords application, which replaces iCloud Keychain and competes more directly with third-party applications like LastPass and 1Password. Separately, anyone interested in locking or hiding applications on their iOS device will soon be able to hand their phone to someone else and be assured that sensitive data and applications remain protected. Passkeys will get another opportunity to replace passwords, as Apple will enable by default a new feature that automatically transitions accounts from passwords to passkeys on iOS and macOS.
Finally, an Apple announcement with serious impact for privacy professionals: Apple introduced AdAttributionKit, a new framework for advertising attribution on both iOS and the web. It can be configured to work with SKAdNetwork, but it has been received as a replacement for all attribution functions. All data involved is subject to “crowd anonymity,” Apple’s approach to privacy protection that adds statistical noise to potentially identifiable data. Apple has also made the framework app-store agnostic, which means it should allow attribution for advertisements in apps installed via alternative app marketplaces. This aligns with efforts from other large platforms to develop advertising solutions that are less reliant on sharing third-party data across the advertising ecosystem. At the same time, it solidifies some of the differences between Apple’s approach and that taken by Google, which recently announced a shift in direction on deprecating third-party cookies.
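Apple has not published the precise mechanics of crowd anonymity, but the general technique it invokes, protecting aggregate measurements by adding statistical noise, can be illustrated with a short sketch. The Laplace-noise example below is a generic, hypothetical illustration in the spirit of differential privacy, not Apple’s implementation; the function name and the epsilon parameter are our own assumptions.

```python
# Generic illustration of protecting an aggregate count with statistical noise.
# This is NOT Apple's crowd-anonymity implementation; it is a standard
# Laplace-noise sketch used only to show the underlying idea.
import random


def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    """Return the true count plus Laplace noise with scale 1/epsilon.

    A smaller epsilon means more noise, which makes it harder to infer
    whether any single user's conversion contributed to the count.
    """
    scale = 1.0 / epsilon
    # The difference of two exponential draws follows a Laplace distribution.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise


if __name__ == "__main__":
    # For example, 42 ad-driven installs reported with noise added.
    print(round(noisy_count(42, epsilon=0.5), 2))
```

The same general idea underlies privacy-preserving attribution: advertisers learn approximate campaign-level counts while any individual conversion remains hidden in the noise.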
Summary
Major developer conferences showcased AI as the dominant theme this summer, with Google, Microsoft, and Apple each announcing significant AI integrations across their platforms. Privacy professionals face challenges in assessing AI-related privacy risks, and those challenges must be addressed as AI transitions from isolated applications into deeply embedded system functions.
FPF Highlights Intersection of AI, Privacy, and Civil Rights in Response to California’s Proposed Employment Regulations
On July 18, the Future of Privacy Forum submitted comments to the California Civil Rights Council (Council) in response to its proposed modifications to regulations under the state Fair Employment and Housing Act (FEHA) regarding automated-decision systems (ADS). As one of the first state agencies in the U.S. to advance modernized employment regulations that account for automated-decision systems, the Council is likely to influence how other states, regulators, and policymakers consider how existing civil rights and data privacy laws apply to artificial intelligence.
To help these regulations provide clarity and constructive guidance for organizations and individuals alike within existing laws and frameworks, including California’s consumer privacy laws, FPF offered four recommendations to the Council:
1. Definition Alignment: The Council’s definition of “automated decision system” should align with similar regulations at the state and federal levels to facilitate greater clarity and compliance.
2. Role-Specific Responsibilities: The Council should create legal standards for when a developer of an AI system becomes an agent or employment agency, accounting for role-specific responsibilities and capabilities in the AI system lifecycle.
3. Data Retention and Privacy: Data retention and record-keeping requirements should be reasonable and align with California consumers’ rights to data privacy and data minimization.
4. Additional AI Governance Measures: The Council should conduct additional inquiries about the use of ADS and existing civil rights laws, including assessing whether automated systems are fit for purpose.
Each is summarized below in brief. For more information, you can read FPF’s full comments to the Council here.
Definition Alignment
With at least four California state governing bodies (the Council, the California Privacy Protection Agency, the California Government Operations Agency, and the California Legislature) considering regulatory action on automated decision-making technology, consistent terminology across regulations enhances AI governance and prevents conflicts that could arise from divergent definitions. To ensure that regulatory efforts are targeted toward technologies that play an impactful role in individuals’ rights, FPF recommended alignment with the definitions in Government Code § 11546.45.5, the CPPA Draft Regulations, and Assembly Bill 2930, which require that the ADS play a “substantial” role in the decision-making process.
The Council’s proposed definition and the three definitions FPF recommended aligning with read as follows:
Council’s proposed modifications (FEHA regulations): “A computational process that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decisionmaking that impacts applicants or employees.”
CPPA Draft Regulations: “Any technology that processes personal information and uses computation to execute a decision, replace human decision-making, or substantially facilitate human decisionmaking.”
Government Code § 11546.45.5: “‘High-risk automated decision system’ means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, housing or accommodations, education, employment, credit, health care, and criminal justice.”
Assembly Bill 2930: “A system or service that uses artificial intelligence and has been specifically developed to, or specifically modified to, make, or be a substantial factor in making, consequential decisions.”
Role-Specific Responsibilities
ADS governance structures and corresponding accountability mechanisms should account for developers’ and deployers’ role-specific responsibilities. As explained in FPF’s Best Practices for AI and Workplace Assessment Technologies, “Developers and Deployers each have important roles in ensuring that Individuals understand when — and to what extent — AI tools have Consequential Impacts…[and p]articular disclosures should be provided by the entity that is best positioned to develop the content of the disclosure and communicate it to Individuals.” Establishing a legal standard in the proposed modifications would help clarify the degree of involvement, control, and influence required for an AI developer to become accountable for discriminatory outcomes based on the role and capability-specific responsibilities of developers and deployers and their relationship with one another.
Data Retention and Privacy
To minimize the risk of individuals’ personal data being misused or breached and to uphold Californians’ privacy rights, FPF recommends that the Council align the proposed regulations’ record-keeping and data retention requirements with existing privacy rights and obligations under the California Consumer Privacy Act (CCPA), the California Privacy Rights Act (CPRA), and regulations set forth by the CPPA, and clarify how those requirements interact with them. As proposed, the modifications’ retention requirements for employers and developers may not only violate California data minimization principles but also raise questions about whether they are meant to override or yield to existing California privacy rights to delete such data or to opt out of automated decisionmaking technology.
Additional AI Governance Measures
Finally, ADS should not perpetuate discrimination or exacerbate harm, but updates to existing employment regulations may not be enough to mitigate all forms of discriminatory conduct or provide sufficient guidance. We recommend that the Council make additional inquiries to understand the use of ADS and the impact of existing civil rights laws. To prevent discriminatory effects and overall harm, AI tools must be validated and tested to ensure they solve the problems they are designed for. FPF acknowledges that discrimination can arise not only from faulty or inaccurate systems but simply because an AI system is not fit for its intended purpose. Accordingly, the Council should consider existing AI governance measures, such as “fit for purpose” tests, that further support civil rights protections and account for the limitations of AI.
FPF Responds to the Federal Election Commission Decision on the use of AI in Political Campaign Advertising
The Federal Election Commission’s (FEC) abandoned rulemaking presented an opportunity to better protect the integrity of elections and campaigns, as well as to preserve and increase public trust in the growing use of AI by candidates and in campaigns. When generative AI is used carefully and responsibly, it can reach different segments of the population and address the needs and concerns of specific groups and populations. However, generative AI also carries the potential to erode public trust and damage the integrity of campaigns, elections, and campaign communications. The FEC must consider opportunities to encourage the responsible use of generative AI to mitigate the risks that it may pose to democracy, including its potential to amplify pre-existing discrimination and inequitable practices.
– Amie Stepanovich, VP for U.S. Policy, Future of Privacy Forum
FPF previously submitted comments to the FEC on the use of AI in campaign ads, drawing from an op-ed by FPF’s VP for U.S. Policy, Amie Stepanovich, and Policy Counsel for AI, Amber Ezzell, in which they explained how generative AI can be used to manipulate voters and election outcomes, as well as the benefits to voters and candidates when generative AI tools are deployed ethically and responsibly.