Event Report: FPF Side Event and Workshop on Privacy Enhancing Technologies (PETs) at the 2022 Global Privacy Assembly (GPA)
The 2022 Global Privacy Assembly (GPA), which has brought together most of the world’s data protection authorities (DPAs) every year since 1979 to share knowledge and establish common priorities among regulators, took place between October 25 and 28 in Istanbul (Türkiye). The Future of Privacy Forum (FPF) was invited by the organizers of the GPA (the Turkish DPA) to host a two-part side event during the GPA’s Open Session (on October 25 and 26), as well as a capacity-building workshop for regulators during the Closed Session (on October 28).
These sessions approached the topic of Privacy Enhancing Technologies from three different angles:
The regulators’ take: PETs are promising, but no silver bullet. The first part of the FPF Side Event offered the regulators’ perspective and was titled ‘Regulatory Views on the Role and Effectiveness of PETs’. The session was moderated by Limor Schmerling Magazanik, Director of the FPF-affiliated Israel Tech Policy Institute (ITPI), and featured contributions from Rebecca Kelly Slaughter (Commissioner at the US Federal Trade Commission, or ‘FTC’), Tobias Judin (Head of the International Department of the Norwegian DPA, the ‘Datatilsynet’), Gilad Semama (Privacy Commissioner of Israel), and Vitelio Ruiz Bernal (Director of Supervision at the Mexican DPA, the ‘INAI’).
The view of practitioners: a call for regulatory clarity and predictability. The second part of the Side Event was entitled ‘Lessons Learned from Implementing PETs’ and saw privacy leaders from industry share their experiences of leveraging PETs in their compliance efforts. The panel was moderated by FPF’s CEO Jules Polonetsky and featured as panelists Anna Zeiter (Chief Privacy Officer at eBay), Emerald De Leeuw-Goggin (Global Head of Privacy at Logitech), Barbara Cosgrove (CPO at Workday), and Geff Brown (Associate General Counsel at Microsoft).
FPF’s capacity building workshop. The FPF workshop during the GPA’s Closed Session was conducted by FPF’s Vice President for Global Privacy, Dr. Gabriela Zanfir-Fortuna, and Managing Director for Europe, Dr. Rob van Eijk. The session covered the legal qualification of PETs under the EU’s data protection framework and how they can be leveraged to achieve compliance with it, followed by a primer on three PETs: Differential Privacy, Synthetic Data, and Homomorphic Encryption. This workshop was a condensed version of the Masterclass that FPF hosted at the 2022 Computers, Privacy and Data Protection (CPDP) Conference in May (recorded).
Below, we summarize the discussions in the two parts of the FPF Side Event with regulators and privacy leaders and highlight key takeaways.
The regulators’ take: PETs are promising, but no silver bullet
Moderator Limor Schmerling Magazanik opened the first discussion by observing that regulators have a dual role regarding PETs: issuing guidance to clarify when and how PETs should be deployed in different scenarios to ensure compliance with privacy laws; and providing tailored advice to lawmakers that wish to promote the use of PETs for the pursuit of public interest tasks and the responsible use of data.
On this note, Gilad Semama noted that PETs seem to offer ways of combining innovation in the tech sector with the protection of privacy as a constitutional right in Israel. Semama highlighted that companies have expressed their need for certainty about how they can use PETs to achieve compliance with the privacy framework. He added that it is challenging to find a one-size-fits-all solution in this respect, but that the Privacy Commissioner is trying to issue flexible guidance and answer the public’s queries on PETs for the benefit of businesses and DPOs, by referring to accountability and helping them choose the most appropriate PET for specific use cases. According to Semama, PETs should be complemented with other data security solutions to provide meaningful protections. He also noted that companies developing PETs in Israel need access to funding and that a recent joint project between the regulator and the Innovation Authority of Israel may help in this respect.
Next up, Rebecca Kelly Slaughter stressed the potential of PETs to promote competition and consumer protection, as they can drive innovation and serve as a positive dimension of competition. However, some applications of PETs can be misleading and competition-inhibiting. This means that, according to Slaughter, the value of PETs should be assessed against their concrete effects. The Commissioner stated that the FTC should focus mainly on providing guidance, through FTC rulemaking, to assist businesses developing and implementing PETs, rather than on strict enforcement. However, the FTC will not approve broad safe harbor provisions for the use of specific PETs, as their effectiveness is generally context-specific.
Slaughter suggested PETs could enable the implementation of privacy-preserving age verification systems, although the FTC has yet to see such a solution. This would enable businesses to move away from notice- and consent-based standards for the processing of children’s data, which is one of the FTC’s current aims. According to Slaughter, consent does not provide adequate protection for children’s online privacy, and providers should rather focus on data minimization, purpose limitation, and storage limitation.
The FTC is currently receiving comments on its proposed Commercial Surveillance and Data Security Rulemaking, which also touches on PETs. The contributions to the public consultation promise to offer a compendium of perspectives that stakeholders can tap into when developing and implementing PETs. In addition, Slaughter acknowledged that the FTC needs to build collaboration with, and draw inspiration from, regulators in other jurisdictions, including when it comes to issuing enforcement orders. As companies roll out PETs across borders, consistent regulatory approaches will increase the likelihood of broad uptake of PETs by small and large players.
Tobias Judin followed up on Slaughter’s comments by saying that, when it comes to greenlighting PETs, DPAs should explain that companies do not need to choose between data collection and privacy, or between innovation and data protection. Judin used health research as an example, outlining that researchers often need to collect data about rare diseases across jurisdictions to make their datasets more representative, even knowing that the level of data protection is not equivalent in all targeted countries. In that context, PETs such as homomorphic encryption or differential privacy may provide reassurance to research subjects. Judin also stressed that confidential computing can mitigate security vulnerabilities that often exist when research data is stored on premises rather than in the cloud.
Judin also elaborated extensively on federated learning, which allows controllers to check their data processing systems for bias through careful analysis of larger, distributed datasets. He explained that federated learning allows an AI model to be trained on data that stays on users’ devices. He gave the example of Google’s Gboard, which enabled the company to make predictions about what individuals wanted to type without the data leaving their device.
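To make the mechanics concrete, the sketch below illustrates the federated averaging idea with a toy linear model: each simulated device trains locally and shares only weight updates, never raw data. This is a minimal illustration, not Google’s or any vendor’s implementation; the function names, model, and data are hypothetical.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a toy linear model on one device; the raw (X, y) data never leaves it."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(global_weights, device_datasets):
    """Each device trains locally; only the resulting weights are averaged centrally."""
    updates = [local_update(global_weights, X, y) for X, y in device_datasets]
    return np.mean(updates, axis=0)

# Hypothetical round: three devices, each holding its own private (X, y) data
rng = np.random.default_rng(0)
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
weights = np.zeros(3)
for _ in range(10):  # ten federated rounds
    weights = federated_average(weights, devices)
```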
Another example is how the Norwegian DPA advised banks within its regulatory sandbox for responsible AI to cooperate when training their money-laundering detection algorithms. As banks do not normally have enough ‘suspicious’ customers to train their detection algorithms, they tend to be overzealous, which leads to false positives and data protection issues. However, the DPA noted that banks could cooperate in the development of a more effective algorithm without sharing raw data about their customers by using differential privacy, as long as they prevented model inversion attacks. The DPA also conceded that banks would need to tweak the model and the underlying training and input data as they went along to ensure the algorithm’s effectiveness, which should reassure diligent AI developers concerned about the risk of fines.
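As a rough illustration of the kind of differentially private sharing described above, the snippet below applies the Laplace mechanism to a count before it leaves an organization. The scenario, parameter values, and function name are hypothetical; real deployments require careful sensitivity analysis and privacy budgeting.

```python
import numpy as np

def laplace_count(true_count, epsilon=1.0, sensitivity=1.0):
    """Release a count with Laplace noise scaled to sensitivity/epsilon, so that
    adding or removing any single customer barely changes the output's distribution."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: a bank shares a noisy count of customers who matched a
# risk indicator instead of sharing raw customer records with other banks.
noisy_count = laplace_count(true_count=42, epsilon=0.5)
print(round(noisy_count))
```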
Lastly, Vitelio Ruiz Bernal stressed the importance of helping businesses achieve security standards that can help them comply with data protection law. In this respect, he mentioned the INAI’s data protection laboratory, which is dedicated to analyzing apps and web-based applications that operate as ‘black boxes’. The INAI has found that processors assisting controllers in those contexts are often under-resourced and reluctant to use PETs due to their perceived high costs. Bernal revealed that the INAI is currently looking for public-private collaborations to develop accessible PETs and to issue guidelines on specific PETs (e.g., encryption), also inspired by the work of the Berlin Group on the matter. Given Mexico’s specific legal requirements for cloud service security, Bernal suggested that PETs could boost the uptake of cloud services by increasing trust among stakeholders.
The view of practitioners: a call for regulatory clarity and predictability
To frame the second panel of the Side Event, Jules Polonetsky reflected on the privacy community’s eagerness to learn how industry privacy leaders are integrating PETs into their compliance strategies, including both their successes and their less successful experiences. He also asked the panelists what actions they would like to see from regulators and policymakers in this space to promote the uptake of PETs.
Anna Zeiter revealed that eBay has had meetings with its lead DPA in Germany about how PETs could help them comply with the Court of Justice of the European Union (CJEU)’s Schrems II ruling on international data transfers, in particular on the implementation of supplemental measures in accordance with the European Data Protection Board (EDPB)’s guidance. In that context, the DPA focused on measures such as tokenization and encryption (in transit and at rest).
Zeiter highlighted the UK Information Commissioner’s Office (ICO)’s PETs guidance and said it constituted an opportunity for other regulators to evaluate where they stand on the matter. The speaker also called for global alignment among DPAs, because companies will implement PETs across very different jurisdictions. Zeiter argued that, for companies to know whether they should invest in PETs, regulators need to give them reassurance, for example in the form of some sort of PET ‘whitelist’ for particular contexts of application. Additionally, Zeiter underlined that companies that develop and use PETs, and their DPOs, have a role in educating regulators, a point echoed by a DPA official in the room.
Emerald De Leeuw-Goggin described Logitech’s offering of PETs as a service for its internal teams of software developers. According to the speaker, this involved making PETs more accessible and scalable within the wider decentralized organization, developing privacy engineering capabilities, and securing the buy-in of Chief Technology Officers. De Leeuw-Goggin noted that PETs are still not mainstream enough for an SME owner to feel confident investing in and implementing them, partly due to the existing skills gap in the field. As PETs become mainstream, they will also become more understandable and usable across sectors and company sizes.
Barbara Cosgrove stated that B2B companies like Workday tend to receive questions from their customers on how best to implement PETs into their software solutions. This includes masking or pseudonymizing data, or limiting employee access to data. Sometimes, more sophisticated measures, like differential privacy, could be adequate, but companies are reluctant to invest resources in them absent regulatory clarity, particularly on de-identification. Cosgrove agreed that businesses and regulators need to put their heads together to develop use cases and standards that would increase legal certainty around the effective use of PETs. Co-regulatory solutions like Codes of Conduct could help demonstrate that PETs are used in a compliant manner.
Finally, Geff Brown highlighted how differential privacy has become usable in multiple applications, allowing providers to process aggregated telemetry data at scale for analytics. Microsoft is using the technique to improve its Natural Language Processing models, including text and speech prediction. In that context, differential privacy allows companies to demonstrate the accuracy of the model without compromising individuals’ privacy. Brown argued that tech-savvy companies need to better explain PETs to consumers and corporate customers, but that standardization efforts and favorable DPA positions can also help. In this context, Geff wished for an EDPB update to the 2014 guidance on anonymization, and for regulators to carry out PET testing and share the results with the public, thereby increasing knowledge of and trust in the technologies.
Call for Nominations Open: FPF’s Award for Research Data Stewardship
When companies share data with researchers in a way that protects data, the collaboration can unlock new scientific insights and drive progress in medicine, public health, education, social science, and many other fields.
FPF is thrilled to announce the open nomination period for FPF’s 3rd Annual Award for Research Data Stewardship. The Award recognizes partnerships between companies and research institutions where a company shares data it holds in a privacy-protective manner with a researcher or research team for scholarly publication.
One example of such an extraordinary partnership to advance scientific and medical progress through privacy-protective research data sharing is the collaboration between Stanford Medicine researchers and Empatica, a medical wearable and digital biomarker company. This award-winning collaboration studied whether data collected by Empatica’s researcher-friendly E4 device, which measures skin temperature, heart rate, and other biomarkers, could detect COVID-19 infections before the onset of symptoms.
The award is presented to the company and its academic partner based on several factors, including adherence to privacy protections in the sharing process, the quality of the data handling process, and the company’s commitment to supporting academic research.
The Award is a part of FPF’s “Corporate Data Sharing for Research: Next Steps in a Changing Legal and Policy Landscape” project to accelerate the safe and responsible sharing of administrative data between companies and academic researchers. This project is supported by the Alfred P. Sloan Foundation, a not-for-profit grantmaking institution whose mission is to enhance the welfare of all through the advancement of scientific knowledge.
GDPR and the AI Act interplay: Lessons from FPF’s ADM Case-Law Report
In May 2022, the Future of Privacy Forum (FPF) launched a comprehensive Report analyzing case-law under the General Data Protection Regulation (GDPR) applied to real-life cases involving Automated Decision-Making (ADM). Our research highlighted that the GDPR’s protections for individuals against forms of ADM and profiling go significantly beyond Article 22 (which provides for the right of individuals not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects on them) and are currently being applied by courts and Data Protection Authorities (DPAs) alike. These protections range from detailed transparency obligations, to the application of the fairness principle to avoid discrimination, to strict conditions for valid consent in ADM cases.
As EU lawmakers are now discussing the amendments they would like to include in the European Commission (EC)’s Artificial Intelligence (AI) Act Proposal, what lessons can be drawn from GDPR enforcement precedents, as outlined in the Report, when deciding on the scope and obligations of the Act?
This blog will explore: the link between the GDPR’s provisions as relevant for ADM and the AI Act Proposal (1); how the AI Act’s concepts of providers and users fare compared to the GDPR’s controllers and processors (2); how the AI Act facilitates GDPR compliance for the deployers of AI systems (3); the opportunities to enhance or clarify obligations under the AI Act through the lens of ADM jurisprudence (4); the overlaps between GDPR enforcement precedents and the AI Act’s prohibited practices or high-risk use cases (5); the issue of redress under the GDPR and the AI Act (6); and a compilation of lessons learned from the FPF Report in the context of the debates around the AI Act (7).
Note: when referring to case numbers in this blog, the author is using the numbering of cases in the FPF Report.
Both the GDPR and the proposed AI Act are grounded on Article 16 TFEU for the protection of personal data
One of the two legal bases used by the EC to justify the AI Act Proposal is Article 16 of the Treaty on the Functioning of the European Union (TFEU), which mandates the EU to lay down the rules relating to the protection of individuals with regard to the processing of personal data. This means that, at least to some extent, the AI Act’s rules would complement the protections afforded to data subjects under the GDPR, which is also based on Article 16 TFEU. In fact, in their 2021 Joint Opinion on the AI Act, the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB) have suggested making GDPR compliance a precondition for allowing an AI system to enter the European market as a CE marked product under the AI Act.
AI systems that would be regulated under the initial Proposal of the AI Act are the ones that rely on the techniques and approaches mentioned in Annex I of the Proposal (such as machine learning and logic-based approaches). Such techniques and approaches could constitute or enable ADM schemes to be implemented by controllers covered by the GDPR. However, the AI Act is generally agnostic regarding the decision-making scheme vis-à-vis individuals when the deployment of an AI system is at stake. This means that its scope is broader than ADM falling in or outside of the Article 22 GDPR prohibition (i.e., ‘qualifying’ or ‘non-qualifying’ ADM).
As an illustration, Article 3(1) AI Act mentions that software that can produce “predictions and recommendations” – which courts and DPAs have generally not considered to be fully automated decision-making thus far – may constitute AI systems covered by the AI Act if they use one or more of the techniques and approaches mentioned in Annex I.
Moreover, the AI Act’s rules under Title III, Chapters 2 and 3 on high-risk AI systems (i.e., those intended to be used as a safety component of a product, or that are themselves products, covered by the Union harmonization legislation listed in Annex II, or that fall under the Annex III list) apply to the systems’ providers, users, importers, or distributors even if the final decision affecting a natural person is to be taken by a human on the user’s side, on the basis of the suggestions, signals, prompts, or recommendations provided by the AI system.
However, even where the AI Act does not provide specific protections for the rights of individuals where AI systems are underpinned by, or result in, solely automated decision-making with legal or similarly significant effects on them, the safeguards provided by Article 22 GDPR will nevertheless kick in. On the other hand, and as mentioned in the Report, even in cases where Article 22 GDPR does not apply to a particular AI system, the rest of the GDPR applies to non-qualifying ADM and, generally, to the processing of personal data via AI systems. This could include the AI system’s training, validation, and input data, as well as its outputs, if they qualify as ‘personal data’ under the GDPR, regardless of whether the data are processed by the system’s providers or users. It is also noteworthy that there may be instances of ADM covered by the GDPR that do not involve any use of AI systems, but rather other forms of automating decisions.
The AI Act’s users are generally the GDPR’s controllers in the AI system’s deployment phase
Instead of focusing on the role of the parties in the AI value chain with regard to the processing of personal data, the AI Act Proposal focuses on the entities that develop and market AI systems and those that use them for commercial purposes, i.e., ‘providers’ and ‘users’, respectively. Each is assigned specific obligations, but most of the regulatory burden is placed on providers, in particular when it comes to high-risk AI systems (HRAIS) and their conformity assessments.
This way of defining the main actors subject to obligations under the future AI Act may create inconsistency with the GDPR’s definitions, roles, and responsibilities for covered entities. It has also led the EDPB and the EDPS to ask EU policymakers to ensure the AI Act’s obligations are consistent with the roles of controller and processor when personal data processing is concerned. Indeed, ‘providers’ under the AI Act will likely not be considered the data controllers under the GDPR during their AI systems’ deployment phase.
Under the GDPR, the bulk of obligations, liability, and accountability for how personal data are processed is assigned to ‘controllers’. It is the AI Act’s ‘users’ who will generally be the ‘controllers’ under the GDPR in the deployment phase of AI systems. Therefore, even if they have a very limited set of duties under the Act (e.g., Article 29), they retain liability and accountability under the GDPR for how personal data is used by, or results from the use of, AI systems. It is more likely that providers would qualify as ‘processors’ under the GDPR, processing personal data on behalf of users, notably if they provide support or maintenance services for AI systems involving the processing of personal data on users’ behalf and under their instructions.
The situation is different in the development phase of AI systems, where ‘providers’ will likely be considered “controllers” under the GDPR whenever they build AI systems relying on collection, analysis, or any other processing of personal data. The same will be the case for the testing phase of AI systems (e.g. bias monitoring) and for post-market monitoring purposes, which may be legally mandatory under Articles 10 and 61 of the AI Act. In these cases, the status of the provider as controller would derive from the law that imposes data processing duties (i.e., the AI Act), as mentioned in EDPB guidance on the concept of controller (para. 24).
Complex questions further arise in relation to potential joint controllership situations under the GDPR between ‘users’ and ‘providers’ under the AI Act in the deployment phase of AI systems. For instance, does the legal obligation that providers have under the AI Act to determine the AI system’s intended purpose and technical data collection lead to the qualification of providers as joint controllers with their customers (users), even if they do not obtain access to the AI system’s input or output data, especially after the Court of Justice of the European Union (CJEU)’s Jehovan todistajat ruling?
The AI Act facilitates, to a certain extent, GDPR compliance for ‘users’ of AI systems
References to GDPR compliance in the AI Act Proposal are scarce. One example is the authorization granted to providers to process special categories of data covered by Articles 9(1) and 10 GDPR when conducting bias monitoring, detection, and correction in relation to HRAIS (under Article 10(5) of the Act). Another is the obligation for users to use the information received in the providers’ HRAIS instructions of use when carrying out Data Protection Impact Assessments (DPIAs) under the GDPR, as per Article 29(6) AI Act. However, as the crux of GDPR obligations for controllers in the commercial deployment phase of AI systems will arguably rest with the systems’ users, it is worth exploring whether the AI Act’s obligations imposed on providers of HRAIS may put users in a better position to comply with the GDPR.
Under Article 13 AI Act, providers have extensive transparency obligations towards their customers (i.e., ‘users’ and, most likely, GDPR ‘controllers’ as explained above), with a view to enable the latter ‘to interpret the system’s output and use it appropriately.’ This transparency comes in the form of instructions of use that should specify, inter alia, the HRAIS’s intended purpose, level of accuracy, performance, specifications for input data, and implemented human oversight measures (as detailed in Article 14 AI Act). Additionally, the HRAIS’s technical documentation that the provider is required to draw up under Article 11 AI Act – and whose elements are listed under Annex IV AI Act – will provide insight about the HRAIS’s general logic, key design choices, main classification choices, relevance of the different parameters, training data sets, and potentially discriminatory impacts, among other features.
Regardless of the wording of Article 29(6) AI Act, users may use the information obtained from providers under Article 13 AI Act and the HRAIS’s technical documentation not only to comply with their duty to carry out a DPIA but also to ensure broader alignment with the GDPR and its transparency imperatives. Such information may also prove useful for complying with other GDPR obligations, such as providing notice to data subjects about profiling and ADM and completing their records of AI-powered data processing activities under Article 30 GDPR.
However, it should be noted that, under the AI Act Proposal, duties for providers only exist with regard to HRAIS, whereas the users/controllers’ above-mentioned obligations under the GDPR may apply even where the underlying AI systems are not qualified as such. For example, under the EDPB’s criteria, controllers could still be obliged to carry out a DPIA on an AI system that is not included in the Annex III AI Act list of HRAIS, such as a social media recommender system or an AI system used in the context of online behavioral advertising.
Specifically, with regard to controllers’ notice obligations on profiling and ADM, the FPF Report shows that DPAs and courts in Europe agree that Articles 13(2)(f) and 14(2)(g) GDPR provide for an ex-ante obligation to proactively inform data subjects about the system’s underlying logic, significance, and envisaged consequences, rather than about the concrete automated decisions that affect them. On the other hand, an obligation to provide decision-level explanations exists where the data subject exercises his or her data access right under Article 15(1)(h) GDPR, as illustrated by two Austrian DPA decisions (cases 14 and 21 in the Report) and an Icelandic DPA decision (case 38 in the Report). In such instances, DPAs ordered controllers to disclose specific elements of information regarding automated credit or marketing scores attributed to data subjects, notably the algorithm’s parameters or input variables, their effect on the score, and an explanation of why the data subject was assigned a particular score.
Thus, when complying with the GDPR’s transparency obligations, controllers who qualify as users under the AI Act would find immense value in leveraging the sort of information that Articles 11 and 13 of the AI Act mandate providers to make available with regard to their HRAIS. Recent GDPR case law on ADM could make a case for extending providers’ transparency duties beyond HRAIS, and for ensuring that the standard of intelligibility for the information AI providers must make available to users is one that enables the latter to comply with their GDPR obligations.
A possible avenue to be considered through the legislative process would be to create a general duty under the AI Act for providers to assist users in their GDPR compliance efforts in relation to the AI systems they sell, even in cases where providers would not act as data processors for the users. Some impetus for this approach may be found in Article 9(4) AI Act, which mandates providers to inform users about the risks that may emerge from the use of the AI system, and to provide them with appropriate training.
The GDPR’s ADM case law calls for further development or clarification of obligations under the AI Act
Some of the decisions analyzed in the FPF Report may provide indications that the AI Act’s obligations for providers and users – at least when HRAIS are at stake – need further development or clarifications.
Accuracy and transparency: In a landmark ruling, the Slovak Constitutional Court (case 4 in the Report) established that local law should require additional measures to protect individuals when automated assessments are carried out by State agencies. According to the Court, such measures could include: (i) checking the AI system’s quality, including its error rate; (ii) ensuring that the criteria, models, or underlying databases are up-to-date, reliable, and non-discriminatory; and (iii) making individuals aware of the existence, scope, and impacts of automated assessments affecting them.
Measures (i) and (ii) seem to be very close to the data quality, accuracy, robustness, and cybersecurity requirements proposed under Articles 10 and 15 AI Act. However, these obligations are geared towards HRAIS’s providers, and not users/controllers. In its decisions against Deliveroo and Foodinho (cases 3 and 6 in the Report), the Italian DPA fined the controllers for not verifying the accuracy and correctness of their automated rider-management decisions and underlying datasets, although these are not explicit requirements under the GDPR’s Article 22(3). Therefore, and at least for HRAIS, the EU legislator could eliminate legal uncertainty by incorporating data quality and accuracy requirements into Article 29, which sets out users’ obligations in this context. Such a requirement could go beyond merely checking whether the HRAIS’s input data is relevant in view of the intended purpose, as Article 29(3) AI Act currently requires.
With regard to measure (iii), it should be noted that making individuals aware of the scope and impact of an automated assessment that falls outside of Article 22 GDPR goes beyond Articles 13(2)(f) and 14(2)(g) GDPR. As a rule, DPAs and courts have agreed with the EDPB that the detailed transparency requirements under said provisions only apply to ‘qualifying’ ADM (see Chapter 1.6.c of the Report). Additionally, the Slovak Constitutional Court’s requirement goes further than Article 52 AI Act, which contains disclosure duties for users who deploy certain AI systems that the Proposal considers to be ‘low-risk’, such as emotion recognition systems, biometric categorization systems, and ‘deepfakes’. In the initial text of the AI Act, there are no transparency requirements towards affected persons when HRAIS are at stake, and no such obligations for non-high-risk systems other than those set out in Article 52 AI Act. Incorporating transparency rights for affected persons in the AI Act, even if only for HRAIS use cases, could reduce information asymmetries between individuals and organizations when decisions are not fully automated (e.g., when an AI system’s recommendations merely support human decision-making).
Lawful grounds for data processing: the AI Act sporadically mentions the interplay with the GDPR’s rules on lawful grounds and exemptions from the prohibition on processing special categories of data. Most notably, Article 54 AI Act creates stringent conditions for further processing of personal data in the context of AI regulatory sandboxes, which do not have an obvious connection to the purpose compatibility test in Article 6(4) GDPR. Additionally, Article 10(5) AI Act authorizes providers of HRAIS to tackle potential biases through the processing of special categories of data, as long as appropriate safeguards are in place. However, it fails to elaborate on the conditions for the collection of personal data from publicly available sources for the mandatory training, validation, and testing of HRAIS. The narrow interpretation of ‘manifestly making data public’ adopted by the EDPB and (more recently) by Advocate General Rantos of the CJEU, together with the enforcement actions against Clearview AI (cases 10 to 13 in the Report), may significantly hinder the possibilities for AI providers to scrape data from the web to test their AI models against bias. Obtaining consent from data subjects for those purposes is often unfeasible, and the legitimate interests lawful ground often plays a limited role when sensitive data are at stake.
Article 10(5) of the AI Act could also potentially facilitate compliance with Article 9(2)(g) GDPR, which allows for the processing of special categories of personal data where necessary for reasons of substantial public interest, as long as it is based on Union or Member State law. Provided that countering bias qualifies as a ‘substantial public interest’, the Union law specifically providing for an obligation to process sensitive data (in this case, the AI Act) needs to provide for suitable measures to safeguard fundamental rights.
This complexity could offer an opportunity for the EU legislator to set boundaries and clear rules on the collection and use of personal data for training, validation, and testing of AI systems, at least for HRAIS.
AI risk management through the lens of ADM: Article 9 AI Act requires providers to establish and maintain a risk management system in relation to their HRAIS, including the ‘identification and analysis of [their] known and foreseeable risks.’ National court decisions on the ‘legal or similarly significant effects’ of ADM under Article 22 GDPR may provide useful criteria that providers should consider when conducting risk analysis of HRAIS that could affect natural persons. In its Uber and Ola rulings (cases 17 to 19 in the Report), the District Court of Amsterdam analyzed the impact on drivers of the algorithms the companies relied on to run their ride-hailing mobile applications. The Court looked into: (i) the sensitivity of the data sets or inferences at stake; (ii) the temporary or definitive nature of the effects on data subjects (or their immediacy); (iii) the effects they would have on the drivers’ conduct or choices; and (iv) the seriousness of the financial impacts potentially involved for individuals. Factors such as these could be codified into Article 9 AI Act as valuable guidance for HRAIS providers’ risk management exercises.
Incorporating human oversight does not rule out the Article 22 GDPR prohibition: a question arises about whether the Article 14 AI Act requirements for the provider to set up human oversight tools for HRAIS would bring such systems outside of Article 22 GDPR. The answer is ‘not necessarily.’ While the prohibition in Article 22 GDPR may apply to AI systems that are not considered ‘high-risk’ under the AI Act, when HRAIS are indeed at stake, Article 14 AI Act only requires providers to incorporate features that enable human oversight, but not to ensure human oversight as a default.
It will be for the user of the HRAIS (i.e., most likely the controller under the GDPR) to ensure such oversight through organizational arrangements. If the latter does not, it may be in breach of Article 22 GDPR, if its ADM scheme is covered by the prohibition. Moreover, we have learned from the decision of the Portuguese DPA against a university that used proctoring software to monitor its students during exams (case 26 in the Report), and from the court cases involving Dutch gun applicants and Austrian jobseekers (cases 8 and 9 in the Report), that merely having a human in the loop with the power and competence to make final decisions does not necessarily mean that the decision will not be considered ‘solely’ automated, and thus that Article 22 GDPR does not apply. For that, human decision-makers need to receive clear instructions or training about why and when they should, or should not, follow the AI system’s recommendations.
In that respect, the EU legislator could envision that users have an obligation to inform their human decision-makers about the elements listed under Article 14(4) AI Act, so that the latter are able to make informed decisions based on the HRAIS’s output, and avoid so-called ‘automation bias’.
Prohibited AI practices and HRAIS overlap with GDPR enforcement precedents on ADM
In general, the AI Act’s Annex III list of HRAIS seems to be based on litigated uses of AI systems by private and public bodies, including some that were analyzed by courts and DPAs under the GDPR and thereunder deemed to be unlawful for a variety of reasons, as the FPF Report shows. Some examples include:
Cases of biometric identification and categorisation, where DPAs have often found the underlying collection and storage of data, plus the training of the AI system to be in breach of the GDPR’s rules on lawful grounds (e.g., Clearview AI enforcement cases);
Systems that select students for university admissions (case 25 in the Report), assess or test them (cases 7 and 26);
Some uses of AI systems in recruitment processes, which may be justified under the exception in Article 22(2)(a) GDPR (case 3 in the Report);
AI systems used for worker management or for managing ride-sharing apps and the services provided by gig workers, which were the focus of the Italian DPA (cases 3 and 6) and Dutch courts (cases 17 to 19 in the Report);
Usages of AI to manage public services and benefits (cases 9, 20, 27, and 32 in the Report), where DPAs agree that strong data quality and bias monitoring requirements are essential;
Automated creditworthiness checks through AI (cases 14, 15, 22, 23, and 36 to 39 in the Report), although Annex III excludes from its scope AI systems that are developed by small scale providers for their own use.
Despite that significant overlap, some AI use cases that were investigated by European DPAs and courts have not been included in the Annex III list, including recommender and content moderation systems (case 24 in the Report), online behavioral advertising systems, and systems used by tax authorities to detect potential fraud (cases 4 and 27 in the Report). Likewise, the EC did not include commercial emotion recognition systems in the HRAIS list, in spite of the recent Hungarian DPA decision against a bank that used an AI system to detect the emotions of customers who contacted its help center, and the fact that such systems are included in paragraph 6(b) of the Annex when used for law enforcement purposes. These enforcement actions could be an early indicator of a future enlargement of the HRAIS use cases outlined in Annex III.
Some of these use cases may already be prohibited under the AI Act’s Article 5(1), notably where they rely on ‘subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior’ and may lead to harm, or where they constitute ‘social scoring’. The latter could be the case of the SyRI algorithm, which the Dutch Government used when trying to detect instances of benefits fraud in neighborhoods hosting poor or minority groups, and which both the District Court of The Hague and the Dutch DPA deemed unlawful (case 27 in the Report). In any case, some of the concepts under Article 5 AI Act (like ‘detrimental or unfavorable treatment’) could be clarified to make the prohibition more certain for AI providers and users, lest they struggle as much as they have with Article 22 GDPR’s definition of ‘legal or similarly significant effects.’
The issue of ensuring redress where AI systems not underpinned by personal data may significantly affect individuals and communities
It is undeniable that the GDPR provides meaningful rights to individuals who are subject to or affected by AI systems underpinned by the processing of personal data. For example, they can access a detailed description of how they were profiled through Article 15, as well as obtain human intervention, express their point of view, and contest the decision when ‘qualifying ADM’ is at stake. Additionally, providers and users of AI systems have a priori obligations when they act as ‘controllers’ under the GDPR to implement data minimization, storage limitation, purpose limitation, confidentiality, and, most relevantly, fairness requirements in how they collect and use personal data, to name just a few.
However, when ‘qualifying ADM’ is not at stake, ‘controllers’ do not have an obligation to ensure human intervention or the possibility to contest an AI-powered decision. This is a gap that could potentially be tackled through the AI Act.
On this note, although the AI Act is largely not rights-oriented, Article 52 AI Act on disclosure duties for certain AI systems creates a precedent for enshrining justiciable rights for individuals who are exposed to or targeted by AI systems.
On the other hand, the Slovak Constitutional Court ruling (case 4 in the Report) required the local legislature to enshrine redress rights enabling individuals to effectively defend themselves against errors of the automated system at issue. Such actions could be possible under the broad effective judicial redress provided by Article 82 GDPR, but only where the underlying processing of personal data is conducted in breach of GDPR rules such as transparency, fairness, and purpose limitation. For those cases where processing of personal data is not involved, but where AI systems may still significantly impact individuals, the AI Act could potentially fill a redress gap.
Lessons from the ADM GDPR case law in the AI Act context
Some of our findings in the ADM GDPR Case-Law Report are useful for the debate around the AI Act, as shown above. Putting them in the context of the European Commission’s Proposal to regulate AI and the ensuing legislative process, we found that:
The AI Act and the GDPR are bound to work in tandem: they are both grounded on Article 16 TFEU, and they have many areas where they complement each other, as well as areas where they could be better coordinated so that both sets of goals are achieved. For instance, the obligations for ‘users’ and ‘providers’ under the AI Act could be further attuned to the GDPR, by enhancing and clarifying the transparency requirements of ‘providers’ towards users, or by including a general reference to providers’ best efforts to facilitate GDPR compliance where ‘users’ are deemed ‘controllers’.
Data accuracy and data quality requirements under the GDPR could be strengthened with cross-references in the AI Act for HRAIS, such as in Article 29.
Further safeguards for the processing of sensitive personal data to counter bias in AI systems could be laid out more clearly for the purpose of protecting fundamental rights.
Obligations to train ‘humans in the loop’ on what criteria they should take into account to rely on or to overturn a decision resulting from the application of a HRAIS could complement the protections the GDPR affords through Article 22.
The existing and future GDPR case-law on ADM may be a source for the future enhancement of the HRAIS list provided in Annex III of the AI Act, considering that most of the existing HRAIS list overlaps with ADM systems that were already subject to GDPR enforcement.
While the GDPR right to an effective judicial redress can potentially cover most situations where AI systems relying on or resulting in personal data infringe principles like fairness, transparency, and purpose limitation, there is room to consider options for further redress, such as in situations where AI systems rely on non-personal data and significantly affect the rights of individuals or communities.
Understanding Extended Reality Technology & Data Flows: XR Functions
This post is the first in a two-part series on extended reality (XR) technology, providing an overview of the technology and associated privacy and data protection risks.
Click here for FPF’s infographic, “Understanding Extended Reality Technology & Data Flows.”
I. Introduction
Today’s virtual (VR), mixed (MR), and augmented (AR) reality environments, collectively known as extended reality (XR), are powered by the interplay of multiple sensors, large volumes and varieties of data, and various algorithms and automated systems, such as machine learning (ML). These complex relationships enable functions like gesture-based controls and eye tracking, without which XR experiences would be less immersive or unable to function at all. However, these experiences often depend on sensitive personal information, and the collection, processing, and transfer of this data to other parties may pose privacy and data protection risks to both users and bystanders.
This blog post analyzes the XR data flows that are featured in FPF’s infographic, “Understanding Extended Reality Technology & Data Flows.” This post focuses on some of the functions that XR devices support today and may support in the near future, analyzing the kinds of sensors, data types, data processing, and transfers to other parties that enable these functions. The next blog post will identify some of the privacy and data protection issues that XR technologies raise.
II. XR Functions
XR devices use several sensors to gather personal data about users and their surroundings. Devices may also log other types of data: data about a person’s location when the device connects to GPS, cell towers, or other connected devices around it; data about the device’s hardware and software; and usage and telemetry data. Devices utilize this data and may further transfer it to enable a variety of functions, which are the technologies that power use cases. For example, eye tracking is a function that enables the use case of optimized graphics.
A. Mapping and Understanding the User’s Environment
Sensors on XR devices may work in tandem to collect various kinds of data—such as surrounding audio, the device’s acceleration, orientation, and environment depth data—to map and find objects in the user’s physical environment. Mapping the space entails constructing three-dimensional representations of the user’s environment in order to accurately situate users and content within a virtual space. Understanding the user’s environment involves identifying physical objects or surfaces in the user’s space to help place virtual content. These functions may enable shared experiences and other use cases.
To map and identify objects in the user’s environment, XR devices collect data across a number of sensors, such as microphones, cameras, depth sensors, and inertial measurement units (IMUs), which measure movement and orientation. When a sensor experiences a performance problem or certain sensor data is unavailable, the device may rely on other sensors, which may mean using a less accurate data proxy or fallback. For example, if photons from a depth sensor fail to indicate a user’s position, the device may use an AI system to fill in the sensory gap using the pixels closest to the area where the depth sensor directed the photons.
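A toy sketch of this fallback idea appears below: missing depth readings are filled in from neighboring valid readings. Real devices use learned models over surrounding pixels and multiple sensors; this linear interpolation, and the sample values, are purely illustrative.

```python
import numpy as np

def fill_depth_gaps(depth_map):
    """Interpolate missing depth readings (NaN) from valid neighbors in the same row.
    Real devices rely on learned models; this linear fill only illustrates the idea."""
    filled = depth_map.copy()
    for row in filled:
        valid = ~np.isnan(row)
        if valid.any():
            idx = np.arange(row.size)
            row[~valid] = np.interp(idx[~valid], idx[valid], row[valid])
    return filled

# Hypothetical 4x4 depth map (meters) with two dropped readings
depth = np.array([[1.2, 1.2, np.nan, 1.3],
                  [1.1, np.nan, 1.2, 1.2],
                  [1.0, 1.0, 1.1, 1.1],
                  [0.9, 1.0, 1.0, 1.1]])
print(fill_depth_gaps(depth))
```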
Once the device has gathered data through its sensors, the device and XR applications may need to further process this data to map and identify objects in the user’s physical space. The kind of processing activity that occurs depends on the features a developer wants its application to have. A processing activity that often occurs after a device collects sensor data is sensor fusion, in which algorithms combine data from various sensors to improve the accuracy of simultaneous localization and mapping (SLAM) and concurrent odometry and mapping (COM) algorithms. SLAM and COM map users’ surrounding areas, including the placement of landmarks or map points, and help determine where the user should be placed in the virtual environment. Some types of XR, including certain MR applications, leverage computer vision AI systems to identify and place specific objects within an environment. These applications may also use ML models to help determine where to place “dynamic” virtual content—virtual objects that respond to changes in the environment caused by adjustments to the user’s perspective. These mapping and object identification functions may also allow for shared experiences. For example, in a theoretical pet simulation, multiple users could toss a virtual ball against a building wall for a virtual puppy to catch.
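The sensor fusion step can be illustrated with a complementary filter that blends a fast-but-drifting gyroscope estimate of orientation with a noisy-but-stable accelerometer estimate. Production SLAM and COM pipelines use far more elaborate (e.g., Kalman-style) fusion; the sample rate and values below are hypothetical.

```python
def complementary_filter(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend a gyro-integrated angle (fast but drifting) with an accelerometer-derived
    angle (noisy but drift-free). Illustrative stand-in for SLAM-grade sensor fusion."""
    gyro_angle = prev_angle + gyro_rate * dt  # integrate angular velocity
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# Hypothetical IMU samples at 100 Hz: (gyro rate in deg/s, accel-derived angle in deg)
samples = [(0.5, 0.04), (0.6, 0.05), (0.4, 0.06)]
angle = 0.0
for gyro_rate, accel_angle in samples:
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt=0.01)
print(angle)
```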
While XR devices generally do not send mapping and environmental sensor data to other parties, including other users, there are a few exceptions. For example, raw sensor data may be transmitted to XR device manufacturers to improve existing device functions, such as the placement and responsiveness of virtual content that users interact with. An XR device may also process and relay users’ location information, such as approximate or precise geolocation data, to enable shared experiences within the same physical space. For instance, two individuals in a public park could interact with each other’s virtual pets in an AR app, with each player using their own device, which recognizes the placement of both the virtual pets and the other player in the park. In other situations, certain parties can observe processed sensor and other data, such as an application developer or an entity controlling the external server that enables an application’s multiplayer functionality. Therefore, the nature of the data, as well as device and application features, may affect who can access XR data.
B. Controller and Gesture-Based Interactions with the Environment
Some XR technologies gather and process sensor data to enable controller- and gesture-based interactions with physical and virtual content, including other users. Gesture-based controls allow users to interact with and manipulate virtual objects in ways that are more reflective of real-world interactions. Most devices use IMUs and outward-facing cameras combined with infrared (IR) or LED light systems to gather data about the controller’s position, such as the controller’s linear acceleration and rotational velocity, as well as optical data about the user’s environment. Some manufacturers are introducing new data collection systems that overcome other methods’ deficiencies, such as when the controllers are outside of the cameras’ view. When visual information about a controller’s position becomes unavailable, IMU data may act as a fallback or proxy for determining controller location. For gesture-based controls, devices gather data about the user’s hands through outward-facing cameras.
XR technologies use algorithms and ML models to provide controller- and gesture-based controls. In controller-based systems, algorithms use data about the controller’s position to detect and measure how far away the controllers are from the user’s head-mounted display (HMD). This allows the user’s “hands” to interact with virtual content. For example, MR or VR maintenance training could allow a mechanic to practice repairing a virtual car engine before performing these actions in the real world. Gesture-based controls utilize ML models, specifically deep learning, to construct 3D copies of the user’s hands by processing images of their physical-world hands and determining the location of their joints. The 3D copies may be sent to developers to enable users to manipulate and interact with virtual and physical objects in applications through pointing, pinching, and other gestures.
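As a concrete illustration of gesture-based input, the snippet below detects a pinch gesture by thresholding the distance between two reconstructed 3D fingertip positions. The joint names, coordinate units, and threshold are assumptions for illustration and do not reflect any specific device’s hand-tracking API.

```python
import numpy as np

def is_pinching(joints, threshold_m=0.02):
    """Detect a pinch by measuring the thumb-tip to index-tip distance in meters.
    The joint names and 2 cm threshold are illustrative, not a real device API."""
    thumb = np.asarray(joints["thumb_tip"])
    index = np.asarray(joints["index_tip"])
    return float(np.linalg.norm(thumb - index)) < threshold_m

# Hypothetical hand pose produced by an ML hand-tracking model (meters, device frame)
pose = {"thumb_tip": [0.10, 0.02, 0.30], "index_tip": [0.11, 0.02, 0.30]}
print(is_pinching(pose))  # True: the fingertips are about 1 cm apart
```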
C. Eye Tracking and Authentication
Eye tracking and, to a lesser extent, authentication power a variety of XR use cases, such as user authentication, optimized graphics, and expressive avatars. An XR device can use data from the user’s eye to authenticate the person using the device, ensuring that the right user profile, with its unique settings, applies during a session. Devices may use inward-facing IR cameras to gather information about the user’s eyes, such as retina or iris data, to this end. ML models can then use the collected eye data to determine whether the person is who they claim to be.
Now and in the future, XR devices will increasingly feature eye tracking to optimize graphics. Graphics quality can affect a user’s sense of immersion, presence, and embodiment in XR environments. One technology that can enhance graphics in XR environments is dynamic foveated rendering, or eye-tracked foveated rendering (ETFR), which tracks a user’s eyes to reduce the resolution rendered in the periphery of the HMD’s display. This allows the device to display the user’s focal point in high resolution, reduce processing burdens on the device, and lower the chance of motion sickness by addressing a cause of latency. XR devices may also facilitate better graphics by measuring the distance between a user’s pupils, or interpupillary distance (IPD), which affects the crispness of the images on the headset display. For example, in a virtual car showroom, a device may utilize the above technologies to enhance the detail of the car feature that the user is looking at and ensure that objects appear at the correct scale.
To determine the parts of the HMD display that should be blurred and help focus the lenses, a device may use inward-facing IR cameras to gather data about the user’s eyes. Some XR devices may use ML models, such as deep learning, to process eye data to predict where a user is looking. These conclusions inform what parts of the display the model blurs. In the future, XR devices may use algorithms to more accurately measure the distance between a user’s pupils, further improving the crispness of the graphics that appear on the HMD’s display.
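The core foveation logic can be sketched as a function that maps a pixel’s distance from the estimated gaze point to a render-resolution scale. The ring radii, scales, and buffer size below are hypothetical and not any vendor’s actual parameters.

```python
def foveation_scale(pixel, gaze, inner_radius=200, outer_radius=500):
    """Return a render-resolution scale for a pixel given the estimated gaze point:
    full resolution in the fovea, progressively coarser toward the periphery."""
    dx, dy = pixel[0] - gaze[0], pixel[1] - gaze[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= inner_radius:
        return 1.0    # foveal region: full resolution
    if dist >= outer_radius:
        return 0.25   # far periphery: quarter resolution
    t = (dist - inner_radius) / (outer_radius - inner_radius)
    return 1.0 - 0.75 * t  # linear falloff between the two rings

# Hypothetical gaze at the center of a 2000x2000 per-eye buffer
print(foveation_scale(pixel=(1000, 1050), gaze=(1000, 1000)))  # 1.0  (in the fovea)
print(foveation_scale(pixel=(1000, 1800), gaze=(1000, 1000)))  # 0.25 (periphery)
```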
Eye tracking is a subset of a broader category of XR data collection: body tracking. Body tracking captures eye movements (described above), facial expressions, and other body movements, which can help create avatars that reflect a user’s reactions to content and expressions in real time. Avatars are a person’s representation in a virtual or other artificial environment. Avatars are already featured in popular shared VR experiences, such as VRChat and Horizon Worlds, but several factors limit their realism. Today’s avatars typically do not reflect all of a user’s nonverbal responses and may lack certain appendages, like legs. Going forward, an avatar may mirror a user’s reactions and expressions in real time, enabling more realistic social, professional, and personal interactions. For example, in a VR comedy club, a realistic avatar may display facial and other body movements to more effectively deliver or react to a punchline.
To depict a user’s reactions and expressions on their avatar, XR technologies need data about the eyes, face, and other parts of the user’s body. A device may use IMUs and internal- and outward-facing cameras to capture information about the user’s head and body position, gaze, and facial movements. XR devices may also use microphones to capture audio corresponding with certain facial movements, known as visemes, as a proxy for visuals of the user’s mouth when the latter is unavailable. For instance, the sound of laughter may cause an avatar to show behavior associated with the act of laughing.
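As a rough illustration of the viseme fallback described above, the snippet below maps a detected sound class to an avatar blendshape weight. The sound labels, blendshape names, and weights are purely illustrative; real systems use learned audio-to-viseme models and much richer facial rigs.

```python
# Illustrative sound-class-to-viseme table; real systems use learned audio models
# and far richer blendshape rigs.
SOUND_TO_VISEME = {
    "AA": "jaw_open",
    "M": "lips_closed",
    "F": "lip_bite",
    "laughter": "smile_open",
}

def drive_avatar_mouth(sound_class, intensity=1.0):
    """Return the blendshape weight an avatar rig would apply for a detected sound."""
    viseme = SOUND_TO_VISEME.get(sound_class, "neutral")
    return {viseme: max(0.0, min(1.0, intensity))}

print(drive_avatar_mouth("laughter", intensity=0.8))  # {'smile_open': 0.8}
```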
As with the other functions, XR devices may need to transmit body-based data to other parties, which may use algorithms to process the collected data to create expressive avatars. XR devices may transmit data relevant to a user’s avatar, such as gaze and facial expression information, to app developers. Developers may use ML models, including deep learning, to process information about the user’s eyes and face in order to detect the face and draw conclusions about where a user is looking and the expression they are making. For facial movements, a deep learning model may analyze each video frame featuring the user’s face to determine which expressions their facial movements correspond to. These expressions are then displayed on the avatar. Devices may then share the avatar with central servers so that other users can view and interact with it.
In addition to avatar creation and operation, future XR technologies may monitor gaze and pupil dilation, motion data, and other information derived from the user’s body to generate behavioral insights. XR technology may be capable of using sensor data generated in response to stimuli and interactions with content to make inferences about users’ interests, as well as their physical, mental, and emotional conditions. When combined with information processed by other sensors, such as brain-computer interfaces (BCIs), these body-derived data points could contribute to the creation of more granular individual profiles and insights into the user’s health. In a medical XR application, for example, doctors could use gaze tracking to diagnose certain medical conditions. However, other parties could use these functions to infer other highly sensitive information, such as a user’s sexual orientation, which could harm individuals.
III. Conclusion
XR technologies often rely on large volumes and varieties of data to power device and application functions. Devices often feature several sensors to gather this data, such as outward-facing cameras and IMUs. To enable different kinds of XR use cases, including shared experiences, devices may utilize ML models that process data about the user and their environment and transmit this data to other parties. While the collection, processing, and transmission of this data may be integral to immersive XR experiences, it can also create privacy and data protection risks for users and bystanders. The next blog post analyzes some of these risks.
Federal Court deems university’s use of room scans within the home unconstitutional
I. Summary
A federal court recently ruled that a public university's use of room-scanning technology during a remotely proctored exam violated a student's Fourth Amendment right to privacy. The decision in Ogletree v. CSU is the clearest indication to date of how courts will treat Fourth Amendment challenges to public higher education institutions' use of video room scans within students' homes. Schools, test administrators, and professional licensure boards often use proctoring technologies in an effort to deter cheating by remote test takers. These technologies take a variety of forms and may involve live proctors observing test takers via webcam, eye-tracking technology, artificial intelligence, recording via webcam and microphone, plug-ins that prevent a test taker's computer from accessing third-party websites or stored materials, and room scans. At issue in this case was a room scan of a student's bedroom workspace.
Since the start of the COVID-19 Pandemic, more schools have incorporated remote proctoring software into testing procedures. The increased use of such technology in both K-12 and higher education settings has led to widespread discussion about the resulting privacy implications, including whether remote proctoring practices violate students' privacy rights. In August, the US District Court for the Northern District of Ohio offered some clarity–as well as new questions–when it granted summary judgment to college student Aaron Ogletree ("the student"), in his Fourth Amendment lawsuit against Cleveland State University ("CSU," or "the university"). The Court determined that the room scan amounted to a Fourth Amendment search because: (1) CSU is a public institution and thus a state actor; (2) the student had a subjective expectation of privacy within the bedroom of his home; and (3) the student's expectation of privacy was reasonable and one generally accepted by society. The Court further found that CSU's Fourth Amendment search was unreasonable by weighing four factors: (1) the student's privacy interest; (2) the nature of the search; (3) the government concern; and (4) efficacy. Finding that only one factor (the government concern) weighed in favor of CSU, the Court deemed the search unreasonable and thus unconstitutional.
While the Court’s decision is not dispositive of many interesting issues, it offers clarity on some and poses new questions about others. Some of the takeaways from the decision include:
Going forward, courts are likely to be skeptical of public higher education institutions conducting room scans within a student’s home absent a warrant.
Although nonpublic actors are not directly implicated, the Court’s decision may lead to broader critiques of room scans. These critiques may influence new norms for institutions such as private universities and nonpublic professional licensure boards.
This decision calls into question the lawfulness of public institutions’ use of other proctoring features beyond room scans.
Several important questions about what this decision will mean for remote proctoring and student privacy remain unanswered.
II. Analysis
The student in this case sued his university after he was asked to complete a room scan of his bedroom workspace before a remote exam, alleging that the practice violated his Fourth Amendment rights. The Fourth Amendment of the United States Constitution states:
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
The Court’s opinion first determines that CSU’s room scan amounted to a Fourth Amendment search, and second, rules that the search was unreasonable. While Fourth Amendment decisions are highly fact-specific, the Court’s analyses of these two factors indicate how other courts may evaluate similar cases in the future.
The Room Scan was a Fourth Amendment Search
In the first stage of its analysis, the Court determined that the room scan did in fact amount to a Fourth Amendment search. Fourth Amendment searches involve government action that violates “a subjective expectation of privacy that society recognizes as reasonable.” Because CSU is a public institution, it is a government actor. As such, the Court had to determine whether the student possessed a subjective expectation of privacy when he took the remote test, and if so, whether his expectation of privacy was one reasonably recognized by society. The Court determined both of these elements were met.
Much of the Court's analysis hinges on the location where the room scan took place: the student's bedroom within his home. The Court acknowledged that privacy within the home is a cornerstone of Fourth Amendment jurisprudence and that there is little question as to whether the student had a subjective expectation of privacy in his bedroom. The Court further found this expectation to be one reasonably understood by society, as there is widespread agreement that privacy interests exist within one's home.
In arriving at its conclusion that a Fourth Amendment search occurred, the Court rejected multiple arguments from CSU. For example, since proctoring services commonly include room scans and students routinely use them, the university argued that the student’s expectation of privacy was unreasonable. The Court was not persuaded by this argument, noting that a practice can be commonly used but still implicate a privacy interest. A lack of opposition from other students does not invalidate the inherent expectation of privacy that exists in the home. The Court explained:
Though schools may routinely employ remote technology to peer into houses without objection from some, most, or nearly all students, it does not follow that others might not object to the virtual intrusion into their homes or that the routine use of a practice such as room scans does not violate a privacy interest that society recognizes as reasonable, both factually and legally.
CSU also cited case law regarding routineness and plain view. Here, the university attempted to equate room scans to activities that courts have declined to characterize as Fourth Amendment searches. Specifically, these precedents involve routine practices that employ modern technologies to observe what is openly visible. Once again, the Ogletree Court rejected the university’s arguments, holding: “[r]oom scans go where people otherwise would not, at least not without a warrant or invitation.” As such, the routine use of room scans for remote exams does not make the student’s expectation of privacy in his bedroom unreasonable.
The Court also rejected the notion that a room scan is not a Fourth Amendment search simply because the technology that room scans use is publicly available. Here, the opinion notes:
While cameras might be generally available and now commonly used, members of the public cannot use them to see into an office, house, or other place not publicly visible without the owner’s consent.
Moreover, the Court was not persuaded by the university's invocation of Quon or Wyman – two Supreme Court cases that involved arguably similar searches. While the Ontario v. Quon decision involved similar facts and found a government search to be lawful, the case centered around employee monitoring software, not remote proctoring room scans. The Ogletree Court declined to apply Quon and subsequent case law beyond an employment context. The Court also engaged in a lengthy discussion about the precedent from Wyman v. James. Wyman is a Supreme Court case finding that mandatory home inspections to qualify for government benefits are not Fourth Amendment searches. The Court examined many factors in its analysis of Wyman's applicability to the case at hand. Ultimately, however, the Court determined that the circumstances in the Wyman case were fundamentally different from the Ogletree facts and could not be equated to support the university's argument.
The room scans at issue in this case can be characterized as suspicionless searches, in that they were not conducted because of specific suspicion of a single student. While suspicionless Fourth Amendment searches are generally unconstitutional, the Ogletree Court acknowledged the exception for searches in which the government has “special needs beyond the normal need for law enforcement.” The test for this exception–and the test the Ogletree Court used in its analysis–considers four factors:
The nature of the privacy interest affected;
The character of the intrusion;
The nature and immediacy of the government concern; and
The efficacy of this means of addressing the concern.
Factor 1: The Privacy Interest
For the first factor, the Court reiterated the well-understood privacy interest that existed within the student’s bedroom. The high value that society and Fourth Amendment jurisprudence place on privacy within the home worked in favor of the Plaintiff in this case. CSU argued that students have different Fourth Amendment rights in a school setting given the unique custodial relationship that exists. Relying on case law involving public K-12 institutions, CSU suggested that the student had lesser privacy interests at the time of the room scan. The Court rejected this assertion, however, pointing out that CSU’s argument rested on precedent involving minor students whose school attendance is required. Here, the student was an adult who voluntarily enrolled in a higher education institution. The Court made this distinction and explained:
Mr. Ogletree was an adult at the time of the search at issue and enrolled at Cleveland State by choice. Although this setting might affect the nature of the privacy interest at stake to some degree, it is difficult to see how enrollment in a higher educational institution would limit the core protections of the home under the Fourth Amendment on the facts and circumstances of this case.
Factor 2: The Intrusion
For the “character of the intrusion” factor, the Court relied heavily on factors from the case record. The Court discussed the lack of alternatives the student had when taking the test given the COVID-19 Pandemic and his inability to take the exam anywhere other than his bedroom. The Court noted that even before the Pandemic, it would be difficult for students to weigh their privacy interests when deciding on a testing location given the university’s policy of leaving remote testing decisions to the discretion of individual instructors. Here, the Court acknowledged:
In normal times, a student might be able to choose another college or among classes with different options for tests and assessments. A student who valued privacy more might opt for courses with in-person tests, while another who prefers convenience might tolerate an intrusion of the sort at issue here. Cleveland State’s policies and practices make such choices and tradeoffs opaque, at best. Faculty members have discretion on how to implement remote testing.
The Court further pointed out that the student had only two hours' notice of the room scan because of policy changes.
The Court's analysis for this part of the four-part test also included factors that favored CSU, including the fact that the room scan was minimally invasive: it lasted less than a minute and was within the student's control. Moreover, the Court acknowledged that some privacy interests might be traded away in exchange for a good or service, such as an education or degree. However, the Court maintained that regardless of any tradeoffs, the student kept his constitutional rights. When weighing the factors that favored the student against the factors that favored the university, the Court ultimately concluded that "the core protection afforded to the home, the lack of options, inconsistency in the application of the policy, and short notice of the scan weighed in Plaintiff's favor."
Factor 3: The Government Concern
The third factor, the "nature and immediacy of the government concern," weighed in favor of CSU, as the parties and the Court agreed that the university had a legitimate interest in preventing academic dishonesty. The room scans were ultimately employed to help meet this interest.
Factor 4: Efficacy
The Court then turned to the fourth and final prong of the test: the efficacy of the school’s means to address its concern. For this factor, in particular, it is important to remember that this case deals with one form of proctoring: room scans. Here, the Court considered the alternative options that existed to achieve the university’s goal of deterring academic dishonesty.
In his argument against the effectiveness of room scans, the student pointed out that the school has many proctoring methods at its disposal, including technology that prevents a test taker from accessing the internet or saved documents during an exam, hiring proctors to monitor students for the duration of a test, and AI detection for plagiarism. In support of his argument about the various alternatives that existed, the student pointed out that the university's policy left proctoring methods to the discretion of individual educators. The student also argued against the efficacy of room scans by discussing different ways a student who is required to complete a room scan before an exam could still access prohibited materials during the testing period.
In contrast, the university argued that a room scan is an effective method to achieve the university’s interests in preventing academic dishonesty. To support its argument that other proctoring features do not offer the same detection and deterrent functions that room scans do, the university suggested that such programs “are not effective at achieving these functions and that sometimes they are inappropriate for students with disabilities.” Here, the university’s argument seemed to hinge on the ineffectiveness of other methods of remote proctoring.
The Court was ultimately persuaded by the student's arguments against efficacy and concluded that "a record of sporadic and discretionary use of room scans does not permit a finding that room scans are truly, and uniquely, effective at preserving test integrity." Not only did other safeguarding methods exist, but the Court also pointed out the existence of alternative evaluation methods–such as a final project or term paper–that do not require remote proctoring at all. This section of the Court's analysis is especially interesting given the efficacy critiques that often arise in the public discourse surrounding remote proctoring technology.
As three of the four factors (nature of the privacy interest affected, character of the intrusion, and efficacy of means) weighed in favor of the student, and only one of the factors (nature and immediacy of the government concern) weighed in favor of the university, the Court concluded that the Fourth Amendment search was not reasonable.
Having determined that the room scan amounted to a Fourth Amendment search and that the search was unreasonable, the Court found that the Plaintiff's Fourth Amendment rights had been violated.
The Court’s decision to grant Ogletree’s motion for summary judgment is the clearest indication to date of how federal courts may treat Fourth Amendment cases involving the use of room scans in remote proctoring software. Nonetheless, it is too soon to tell whether other courts will follow suit or what this decision will mean for remote proctoring generally. Regardless, schools that use remote proctoring software, and more specifically, deploy room scan features, should be mindful of the decision in this case and the Court’s reasoning.
Fourth Amendment cases are especially fact dependent. As such, it is very possible that the case could have had a different result had the circumstances been even slightly different. For example, because of the unique relationship between schools and K-12 pupils, there remains ambiguity as to whether the Court would have arrived at the same result in a case about an elementary or high school student. Moreover, this case focused specifically on room scans; it is unclear whether other forms of remote proctoring, such as ongoing monitoring when a student takes a remote test in their home, would amount to a Fourth Amendment violation under this Court’s efficacy and reasonableness analyses. Nonetheless, this case is a win for student privacy and an indicator of how other courts may rule in future cases.
Regardless of the questions that still exist, interested parties–including schools, students, boards of licensure, and proctoring companies–should be aware of this decision. Entities that employ proctoring software should be mindful of the Court’s reasoning and consider potential legal risks and privacy implications before employing proctoring technologies or requiring room scans within the home.
New Infographic Highlights XR Technology Data Flows and Privacy Risks
As businesses increasingly develop and adopt extended reality (XR) technologies, including virtual (VR), mixed (MR), and augmented (AR) reality, the urgency to consider potential privacy and data protection risks to users and bystanders grows. Lawmakers, regulators, and other experts are increasingly interested in how XR technologies work, what data protection risks they pose, and what steps can be taken to mitigate these risks.
Today, the Future of Privacy Forum (FPF), a global non-profit focused on privacy and data protection, released an infographic visualizing how XR data flows work by exploring several use cases that XR technologies may support. The infographic highlights the kinds of sensors, data types, data processing, and transfers that can enable these use cases.
XR technologies are powered by the interplay of multiple sensors, large volumes and varieties of data, and various algorithms and automated systems, such as machine learning (ML). These highly technical relationships enable use cases like shared experiences and expressive avatars. However, these use cases often depend on information that may qualify as sensitive personal data, and the collection, processing, and transfer of this data may pose privacy and data protection risks to both users and bystanders.
“XR tech often requires information about pupil dilation and gaze in order to function, but organizations could use this info to draw conclusions—whether accurate or not—about the user, such as their sexual orientation, age, gender, race, and health,” said Daniel Berrick, a Policy Counsel at FPF and co-author of the infographic. These data points can inform decisions about the user that can negatively impact their lives, underscoring the importance of use limitations to mitigate risks.
FPF’s analysis shows that sensors that track bodily motions may also undermine user anonymity. While tracking these motions can help map a user’s physical environment, it can also enable digital fingerprinting. This makes it easier for parties to identify users and bystanders while raising de-identification and anonymization concerns. These risks may discourage individuals from fully expressing themselves and participating in certain activities in XR environments due to their concerns about retaliation.
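As a simple, hypothetical illustration of why motion data can undermine anonymity, the toy sketch below reduces head-motion traces to crude summary statistics and matches a new session to the closest enrolled user. Real fingerprinting techniques are far more sophisticated; every name and number here is invented for illustration only.

```python
# Toy illustration of motion-based fingerprinting risk: simple summary statistics
# of head-motion traces can be distinctive enough to link sessions to the same user.
import statistics

def motion_signature(samples: list[float]) -> tuple[float, float]:
    """Reduce a trace of per-frame head-rotation speeds to a (mean, stdev) signature."""
    return (statistics.mean(samples), statistics.stdev(samples))

def closest_user(known: dict[str, tuple[float, float]], sig: tuple[float, float]) -> str:
    """Nearest-neighbor match of a new session's signature against enrolled users."""
    return min(known, key=lambda u: (known[u][0] - sig[0]) ** 2 + (known[u][1] - sig[1]) ** 2)

if __name__ == "__main__":
    enrolled = {
        "user_a": motion_signature([0.8, 1.1, 0.9, 1.0, 0.7]),  # invented traces
        "user_b": motion_signature([2.1, 2.4, 1.9, 2.2, 2.0]),
    }
    new_session = motion_signature([0.9, 1.0, 0.8, 1.1, 0.9])
    print("new session most resembles:", closest_user(enrolled, new_session))
```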
Moreover, FPF found that legal protections for bodily data may depend on privacy regulations’ definitions of biometric data. It is uncertain whether US biometric laws, such as the Illinois Biometric Information Privacy Act (BIPA), apply to XR technologies’ collection of data. “BIPA applies to information based on ‘scans’ of hand or face geometry, retinas or irises, and voiceprints, and does not explicitly cover the collection of behavioral characteristics or eye tracking,” said Jameson Spivack, Senior Policy Analyst, Immersive Technologies at FPF. Spivack was also a co-author of the infographic.
This highlights how existing laws’ protections for biometric data may not extend to every situation involving XR technologies. However, protections may apply to other special categories of data, given XR data’s potential to draw sensitive inferences about individuals.
Meet David Sallay, FPF’s new Youth & Education Privacy Director
FPF is thrilled to announce the new Director of our Youth & Education Privacy Program, David Sallay. David comes to FPF from the Utah State Board of Education, where he served as Chief Privacy Officer and Student Privacy Auditor and worked with schools and districts on implementing Utah's state student privacy law.
Before focusing on privacy, he worked in education as a teacher of English as a Foreign Language at Qatar University and high schools in Hungary. He holds a Master’s in Public Policy from the University of Utah and a Master’s in Education from the University of Pittsburgh.
Learn more about David in the Q&A below.
You started out as a teacher–how did you get involved in student privacy?
I was in the right place at the right time. I worked for the Utah State Board of Education as the assessment data specialist when Utah passed its student privacy law, which created several new positions. It seemed like an exciting new area to work in, so I applied and got the job.
What’s one thing that you wish more teachers understood about student privacy?
I wish more teachers knew that student privacy doesn’t have to be a zero-sum game. We shouldn’t have to make trade-offs where you can’t teach as well or use helpful tools in the classroom to achieve security or privacy. Still, since privacy is unfortunately rarely the default setting, we’ll want to work together to ensure our use of technology and data in the classroom is a true win-win where students learn better while having their data protected.
A lot is going on in youth & education privacy today…What’s one thing that you are optimistic or encouraged about?
I am encouraged by the move to age-appropriate design codes in some jurisdictions since it should lead to more products mapping better to the expectations of parents, educators, and children.
What’s one thing that you are worried or concerned about?
On the flip side, I am also worried about age-appropriate design codes if they don’t strike the right balance, and it can be really hard to find that Goldilocks zone of just right.
What’s something that you think is flying under the radar?
Many of the most damaging privacy harms are small in terms of the number of students they impact and likely won’t appear on the front page of a newspaper. If you see a lot of the privacy complaints that go to the US Department of Education or that we investigated in Utah, they were very often one individual at the school doing something with a record that harmed one or two students. So I think it’s important to look at both big-picture issues as well as the smaller ones.
Utah’s approach to student privacy has been held up as a model for other states to follow (including by FPF!). What advice do (or would) you give another state looking to replicate your approach?
One of the things I think our legislature and state board did well was gathering stakeholders together to study the issue instead of adopting a one-size-fits-all approach. The other thing is to put someone in charge full-time (e.g., a chief privacy officer) to provide adequate support.
What are you reading/listening to lately for work (related to youth privacy)?
I’ve been reading Privacy’s Blueprint by Woodrow Hartzog since it focuses a lot on design as a way to build trust and will hopefully help me better wrap my brain around the new design codes being proposed.
What are you reading/listening to lately for fun?
I’m really drawn to anything about other Hungarian-Americans. Since it’s October, I’m reading a biography of Béla Lugosi that is interesting (there was a lot more to him than just playing Dracula). Also, every October, I try to listen to every Oingo Boingo album since that gets me in a proper Halloween mood.
Do you remember the first time you heard about FPF? What made you want to join the team officially? What is your top priority?
I first heard about FPF when I started working on privacy in Utah. We didn’t know how to build our program and quickly discovered FPF as a resource. At that time, the FPF youth and education team was just one person, so it’s been really neat to see the program grow. A lot of the appeal in joining was being able to work on privacy problems beyond the borders of Utah and beyond education (i.e., in youth privacy). So far, it’s been really good to meet everyone on the team. My top priority right now is to get to know the team better and understand their interests and goals so I can figure out the best way to support them so that collectively we can provide the most value for our stakeholders and ultimately make a real impact for students and youth.
Interested in learning more about FPF’s Youth & Education Privacy work? Visit Student Privacy Compass to learn more.
Indonesia’s Personal Data Protection Bill: Overview, Key Takeaways, and Context
The authors thank Zacky Zainal Husein and Muhammad Iqsan Sirie from Rajah & Tann Indonesia for their insights.
Overview
On September 20, 2022, Indonesia's House of Representatives passed the Personal Data Protection Bill (PDP Bill) (note: linked Bill is in Indonesian). This was the first step towards enactment of the PDP Bill as law; the second step, Presidential assent, followed on October 17, 2022, signifying the enactment and coming into force of the law.
Prior to the passage of the PDP Bill (from hereon referred to as the "PDP Law") (Law No. 27 of 2022), Indonesia lacked a comprehensive personal data protection law. Instead, provisions on personal data protection were distributed across more than 30 different laws and regulations. A first draft of the PDP Law was released for public comment on January 28, 2020. Between January 2020 and September 2022, the PDP Law underwent numerous rounds of consultation and amendment, culminating in the release of a near-final draft on September 5, 2022, and a final draft on September 20, 2022.
The PDP Law establishes responsibilities for the processing of personal data and rights for individuals in a manner similar to other international data protection laws. Many of its core aspects, including definitions of covered data and covered entities, lawful grounds, processing obligations, accountability measures, and controller-processor relationships, share some overlap with other laws around the world – most notably the EU's General Data Protection Regulation (GDPR). However, there are a few notable components unique to the Indonesian context. For instance, the PDP Law includes a broad extraterritorial scope provision that will apply to organizations as long as their processing activities have legal consequences in Indonesia or cover Indonesian citizens outside of Indonesia.
Additionally, the PDP Law broadly exempts the financial services sector, imposes stricter requirements on controllers such as broad record-keeping obligations for processing activities, and has unique provisions on the use of facial recognition technologies. Special categories of data (what the PDP Law refers to as “specific personal data”) explicitly include children’s data and personal financial data. For specific data subject requests, such as access, rectification, and restriction, organizations only have 72 hours to respond.
Data localization, which was introduced in a previous draft, has been replaced by the general obligation for controllers to ensure data transferred across borders remains protected to a standard commensurate with the PDP Law. As for enforcement and sanctions, the PDP Law includes a large spectrum of avenues – from a private right of action for any violations of the law, to administrative fines and criminal penalties. For instance, the law sanctions “intentionally creating false data” with a criminal sentence of up to six years.
Lastly, the structure and function of the data protection authority (DPA), which will be set up after the PDP Law comes into force, may carry unique features, as many details of its operation will be issued at a later date.
While authorities will need to clarify key provisions in subsequent regulations, the PDP Law creates a comprehensive foundation to govern data processing activities in Indonesia. As Indonesia is one of the largest countries in the world, the PDP Law will likely have an impact on data protection both in the regional context of the Asia-Pacific and the global context. Organizations will have a two-year transition period to comply (except for the criminal provisions, which come into force immediately), counted from when the PDP Law went into effect upon Presidential assent on October 17, 2022.
The PDP Law applies to persons, public bodies, and international organizations that process personal data or otherwise perform legal acts recognized under the law in the jurisdiction of Indonesia (Art 2). Persons refer to both natural individuals and corporations (natural and legal persons), while public bodies are organizations that fulfill core administrative functions and receive some funds from state budgetary agencies. Non-governmental organizations (NGOs) may also be considered public bodies if part or all of their funds come from the state. International organizations refer to bodies that are recognized as subjects of international law and have the capacity to make international agreements.
Like other data protection laws inspired by the GDPR, the PDP Law applies extraterritorially to covered actors outside of Indonesia (Art 2). However, unlike other laws, this extraterritorial effect applies as long as the processing of personal data has legal consequences (i) in Indonesia or (ii) for personal data subjects of Indonesian citizens outside of Indonesia. This applicability covers more processing activities than typically seen in other data protection frameworks.
Similar to other data protection laws, the PDP Law distinguishes between “Personal Data Controllers” and “Personal Data Processors.” “Controllers” refer to any person, public body, or international organization acting individually or together to determine the purpose and exercise control of personal data processing. Article 1 defines a processor as the party that processes personal data on behalf of the controller.
Much like other data protection laws, the PDP Law requires processors to perform the processing based on an agreement with the controller under its supervision. However, the PDP Law leaves the ultimate responsibility for data processing with the controllers unless processing occurs outside the agreement, in which case it is the responsibility of the processor. Notably, some obligations of the controllers extend to processors following specific provisions in the PDP Law (see Section 5).
Article 51(4) explicitly permits processors to engage other organizations in sub-processing arrangements – but requires that they obtain written consent from the controller before involving other processors. It is unclear if generalized consent to the use of sub-processors would satisfy this requirement, though this may be clarified in forthcoming regulations.
Normative Grounds of the Law and Data Processing
Added in the final draft of the PDP Bill, Article 3 provides normative grounds for processing, as well as indicates the high-level principles policymakers had in mind when promulgating the law. These include a principle of “Protection” (this is clarified in the explanatory section of the PDP Law to mean that every instance of processing of personal data should be carried out by “providing protection to the personal data subject for his/her personal data and the personal data from being misused”), legal certainty, public interest, expediency, prudence, balance, accountability, and confidentiality. The bases provide insight into the enforcement goals of the PDP Law and ground its provisions in specified rationales and objectives.
The PDP Law applies primarily to the processing of personal data, which refers to the "collection, analysis, storage, improvement and renewal, announcement, transfer, dissemination, disclosure, and deletion of data" (Art 16). This definition shares broad congruence with definitions of data processing seen in other laws. Note that the law seems to provide a closed list of what constitutes processing, without an open-ended catch-all or illustrative examples.
2. Covered Data: Broad definition of “personal data” and novel categories of “specific data”
In the PDP Law, “personal data” is defined broadly and refers to data which, independently or in combination with other data, identifies or can identify an individual, whether directly or indirectly or through electronic or non-electronic systems. Note the Explanatory Memorandum clarifies that this includes both mobile numbers and IP addresses. This definition is similar in scope to equivalent definitions in other major data protection laws internationally, including the definition of “personal data” in Article 4(1) of the GDPR.
Like many global data protection frameworks, the PDP Law distinguishes between personal data of a general nature and categories of sensitive personal data, which the PDP Law terms “specific personal data” and defines as personal data which, if processed, may result in a greater impact (including harm and discrimination) to the personal data subject (Art. 4).
Notably, unlike other personal data protection frameworks, the PDP Law also identifies a number of categories of “personal data of a general nature” which, by definition, would not qualify as specific personal data. These include a person’s full name, gender, citizenship, religion, and marital status, as well as data that is combined with other data to identify an individual.
The categories of specific personal data include:
Health data – defined as individual records or information relating to physical health, mental health, or health services. Note regulators may offer additional clarity to this term in future measures;
Biometric data – defined as an individual’s physical, physiological, or behavioral characteristics that enable unique identification, including facial images, fingerprints, and DNA;
Genetic data – defined as any kind of characteristic of an individual that is acquired during early prenatal development;
Criminal records – defined as written records of a person who has committed or is being charged with an unlawful act, including police records;
Children’s data – the law does not specify the age range in which a person is considered a child; and
Personal financial data – includes, but is not limited to, savings, deposits, and credit card data, as well as other data identified in other laws and regulations.
The PDP Law imposes additional safeguards for processing of specific personal data, including mandatory data protection impact assessments (DPIAs) and data protection officers (DPOs) for large-scale processing (see Section 4 below).
3. Lawful Grounds for Processing and Consent Requirements
Article 20 of the PDP Law establishes six legal bases for processing personal data (whether specific or of a general nature), namely:
Consent of the personal data subject to process the data for a specific purpose;
Performance of obligations under a contract between the personal data controller and the personal data subject;
Performance of a controller’s legal obligations;
Protection of a personal data subject’s vital interests;
Undertaking a task in the public interest or in exercise of legal authority; and
Fulfillment of a legitimate interest, taking into account purpose and need of processing, and balancing the interests of the personal data controller with the rights of the personal data subject.
These bases are similar to those in Article 6 of the GDPR and, like their equivalents in that law, are placed on an even level – no single legal basis takes precedence over any of the others.
Consent Requirements
The PDP Law also contains detailed requirements for controllers to demonstrate that they have obtained valid consent. A request for consent must be accompanied by certain prescribed information, clearly distinguishable from other matters, and in a format that is easily understandable and accessible. The consent itself must be explicit, informed, specific to a purpose, and recorded.
The PDP Law also contains specific provisions for consent in several contexts where the personal data subject may lack legal capacity. Consent for processing a child’s personal data must be obtained from the child’s parents or legal guardians. Note the Law does not provide an age for defining a child. Further, consent for processing the personal data of a person with disabilities may be obtained either from the person or from the person’s guardian. The PDP Law recognizes that further requirements for such processing may be found in future regulations.
In addition to requiring a legal basis for processing of personal data, the PDP Law also requires controllers to adhere to enumerated data protection principles. In particular, organizations must process personal data in a limited, specific, transparent, and lawful manner. Additionally, a specific purpose for processing must be identified and communicated to the data subject, and processing must be accurate, secure, transparent, and responsible. Articles 20-49 of the PDP Law provide further details as to how personal data controllers should operationalize these principles (see Obligations of controllers below).
4. Obligations of Controllers
Data controllers must abide by a series of obligations outlined in the PDP Law, including adhering to lawful grounds for processing and notification requirements, following data protection principles, responding to data subject requests, and implementing accountability and security measures.
As an overarching requirement, data controllers must identify an appropriate legal ground for processing personal data. If they rely on consent, further obligations apply (see Section 3 above). Article 21 requires the controller to provide information to data subjects on the legality, the purposes, the type, and the relevance of processing. Additionally, the controller must be able to show that consent is valid (Art 24) and, if withdrawn, end any processing operation in a specified time period (Art 40). If consent is withdrawn, the controller has to also delete the personal data (Art 43).
Data Protection Principles
Controllers must process data in accordance with data protection principles (some of which reflect the Fair Information Practice Principles – “FIPPs”) which outline the following obligations:
Data controllers must process personal data in a limited, specific, lawful, and transparent manner (Art 27).
Data controllers must process personal data in accordance with a stated purpose (Art 28).
Data controllers must ensure the accuracy, completeness, and consistency of the personal data they process (Art 29), including notifying the data subject of any correction they make in response to a request (Art 30).
Organizations must also operationalize the principle of security of the processing (Art 16(2)(e)) through appropriate technical measures (Art 35) and ensure confidentiality of data (Art 36).
Controllers must also ensure accountability by recording all processing operations and taking other measures to demonstrate responsibility of processing (Art 31). Note the obligation to record all processing activities is broader than other data protection laws.
While the Principles are similar to those in other comprehensive data protection laws, including the GDPR and its Article 5, the Law does not have an explicit principle of data minimization. However, a rough equivalent can be found in the requirement that personal data be processed in a limited, specific manner. The list of principles in the PDP Law also omits any form of the principle of fairness.
Data Subject Access Requests
Subject to notable exceptions, controllers must respond to data subject access requests and uphold other data subject rights (see Section 7 below). When a data subject requests access, the controller must give the subject access to the personal data, as well as provide a track record of the processing operations related to the subject (Art 32). With respect to requests to delay or restrict processing, the data controller must notify the data subject of this action (Art 41) unless an exception applies or a written agreement with the subject specifies otherwise. For access, rectification, and delay or restriction requests, the controller has 72 hours from receiving the request to respond to the data subject. Notably, while the right of the data subject to access their own data is provided for in Article 7, the conditions under which access must be provided are listed separately in Chapter VI, which is dedicated to the obligations of the controller.
In cases when the data subject requests to end processing, the processing has reached the retention period, or the purposes have been achieved, the data controller must end the processing operations (Art 42). Additionally, controllers must delete or destroy personal data if the data subject requests it or has withdrawn consent, when the personal data is no longer necessary for the original purpose of processing, or when controllers process data through unlawful means (Art 43). In both cases of deletion or destruction of personal data, the controller has to notify the data subject (Art 45).
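As a rough illustration of the 72-hour response window mentioned above, the snippet below computes a response deadline for an incoming request. It is a sketch only: the constant and function names are assumptions, and the timing rules in the final implementing regulations may differ.

```python
# Minimal sketch of tracking the 72-hour response window for access, rectification,
# and delay/restriction requests, as summarized above. Illustrative only, not legal advice.
from datetime import datetime, timedelta, timezone

RESPONSE_WINDOW = timedelta(hours=72)

def response_deadline(received_at: datetime) -> datetime:
    """Deadline for responding to the data subject, counted from receipt of the request."""
    return received_at + RESPONSE_WINDOW

def is_overdue(received_at: datetime, now: datetime) -> bool:
    return now > response_deadline(received_at)

if __name__ == "__main__":
    received = datetime(2022, 10, 17, 9, 0, tzinfo=timezone.utc)
    print("respond by:", response_deadline(received))
    print("overdue now?", is_overdue(received, datetime.now(timezone.utc)))
```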
Accountability Measures, DPIAs, and DPOs
Data controllers have additional obligations, such as those to supervise each party involved in the processing of personal data that is under the controller's control (Art 37), notify in writing both the data subject and the DPA in the case of unauthorized disclosure of the data and thus failure to protect it (Art 46), and notify the data subject before the controller (in the form of a legal entity) proceeds with any mergers, separations, acquisitions, consolidations, or dissolutions (Art 48). Finally, data controllers are obliged to implement any orders issued by the DPA in the context of implementing the PDP Law.
Controllers also carry internal reporting obligations, such as the requirement to keep a track record of all processing operations to facilitate data subjects exercising their rights. Under Article 34, controllers must conduct a data protection impact assessment (DPIA) whenever processing of personal data has a high risk of harming the data subject, which includes the following triggers (a simple encoding of these is sketched after the list):
Automated decision-making that has legal consequences or a significant impact on the data subject;
Processing of specific personal data;
Large-scale processing of personal data;
Processing for systematic evaluation, scoring, or monitoring activities;
Processing for matching activities or merging a group of data;
The use of new technology; and
Processing that restricts the exercise of data subject rights.
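The sketch below encodes the Article 34 triggers listed above as boolean flags, for illustration only. The data structure and field names are assumptions; in practice, deciding whether a trigger applies requires legal analysis rather than a flag.

```python
# Minimal, illustrative encoding of the Article 34 DPIA triggers listed above.
# Field names are hypothetical and do not come from the law's text.
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    automated_decision_with_legal_or_significant_effect: bool = False
    specific_personal_data: bool = False
    large_scale: bool = False
    systematic_evaluation_scoring_or_monitoring: bool = False
    matching_or_merging_datasets: bool = False
    uses_new_technology: bool = False
    restricts_data_subject_rights: bool = False

def dpia_required(activity: ProcessingActivity) -> bool:
    """True if any of the listed high-risk triggers applies."""
    return any(vars(activity).values())

if __name__ == "__main__":
    activity = ProcessingActivity(specific_personal_data=True, large_scale=True)
    print("DPIA required?", dpia_required(activity))
```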
Article 53 of the PDP Law also contains obligations for organizations to appoint a data protection officer (DPO) in specified conditions. These include when (i) processing personal data for public services, (ii) the core activities of the controller require regular and systematic monitoring of personal data on a large scale, or (iii) the core activities of the controller consist of large-scale processing for specific personal data or data related to criminal offenses.
The PDP Law does not contain any requirements for choosing DPOs except that they must be a professional and have knowledge of the law. DPOs must advise the controller on compliance, monitor and ensure that processing falls within the ambit of the PDP Law, assess the impact of processing, and act as a contact person for issues related to the processing.
Security and Data Breach Notification
Article 35 specifies security measures organizations must adopt to protect personal data, including preparing and implementing technical and operational measures and employing a risk-based approach to determine the appropriate level of security for the data. Controllers likewise have a duty to prevent personal data from being accessed unlawfully (Art 39). Note that the PDP Law does not specify further security measures but instead defers to future regulations to fill out additional detail.
In the event of a security breach, controllers must submit written notification, no later than three days, to the affected data subject and the DPA. The notice must contain the personal data involved in the breach, when and how the breach occurred, and any remedial measures taken by the data controller to mitigate harm (Art 46). Finally, controllers may have to notify the public of the breach in certain cases. Like other substantive provisions of the PDP Law, future regulations will specify additional information and trigger events.
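For illustration only, the sketch below models a breach-notification record containing the elements listed above and derives the three-day notification deadline. The structure and field names are hypothetical, and the exact start of the notification clock will depend on the implementing regulations.

```python
# Sketch of a breach-notification record reflecting Article 46 as summarized above:
# written notice to affected data subjects and the DPA within three days, describing
# the data involved, when and how the breach occurred, and remedial measures.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class BreachNotice:
    discovered_at: datetime
    data_involved: list[str]
    how_it_happened: str
    remedial_measures: list[str] = field(default_factory=list)

    @property
    def notify_by(self) -> datetime:
        # Three-day window, counted here from discovery for illustration.
        return self.discovered_at + timedelta(days=3)

if __name__ == "__main__":
    notice = BreachNotice(
        discovered_at=datetime(2022, 11, 1, 8, 30),
        data_involved=["email addresses", "hashed passwords"],
        how_it_happened="credential stuffing against a legacy login endpoint",
        remedial_measures=["forced password reset", "rate limiting added"],
    )
    print("notify data subjects and the DPA by:", notice.notify_by)
```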
Exceptions to Processing Obligations
Similar to the case of data subject rights, Article 50 sets the conditions that exempt certain processing activities from obligations under the law when such activities involve (i) national defense or security interests, (ii) law enforcement, (iii) public interests in the context of state administration, or (iv) the financial services sector, monetary and payment systems, and financial system stability carried out in the context of state administration. This last exception is a unique feature of Indonesian data protection law.
The Explanatory Memorandum provides additional detail as to the circumstances that trigger these conditions. For instance, the law enforcement exception applies primarily to investigation and prosecution processes, while public interests include the implementation of census administration, social security, tax, customs, and licensing services.
While these exceptions may be construed broadly, the PDP Law limits them to an exhaustive list of specific obligations, many of which relate to data subject rights. Where an exemption applies, data controllers are not obliged to:
Update or correct errors and inaccuracies (Article 30);
Provide access to the data subject as well as a track record of the processing operations (Art 32);
Maintain the confidentiality of personal data (Article 36);
Terminate the processing (Art 42);
Delete personal data (Art 43), unless the personal data has been processed by unlawful means, in which case the exception does not apply;
Destroy personal data on the basis of a data subject request (Art 44);
Notify the data subject about the deletion or destruction of the data (Art 45); or
Notify the data subject in the event of a failure of data protection due to disclosure (Art 46).
5. Some Controller obligations extend to Processors
Article 52 attaches a number of data controller obligations to processors as well, including:
Ensuring accuracy, completeness, and consistency of personal data, including “conducting verification” (Art 29);
Recording all processing activities (Art 31);
Ensuring the security of personal data by implementing appropriate technical and operational measures based on the risk posed by the data (Art 35);
Maintaining confidentiality of personal data (Art 36);
Supervising all parties involved in the processing of personal data (Art 37);
Protecting data from unauthorized processing (Art 38); and
Preventing unlawful access of personal data (Art 39).
Finally, processors share the obligation to appoint a DPO if the processing activity meets the qualifying criteria (described above). Article 53(3) specifically notes that a DPO “may come from inside and/or outside the personal data controller or the personal data processor.”
6. Specific Processing Restrictions (Facial Recognition, Children’s Privacy, Persons with Disabilities, ADM)
The PDP Law restricts the processing of personal data in specific circumstances.
Facial Recognition Technology – Article 17 requires controllers that use facial recognition technology or install visual data processing devices in public places to do so only for the purposes of security, disaster prevention, or traffic information analysis. Additionally, organizations must notify the public that such technology is in use in areas where they have installed devices, and must not use facial recognition to identify a specific person. However, these requirements do not apply to the activities of law enforcement or the prevention of criminal offenses.
Children’s Data – Article 25 states that controllers must process children’s personal data in a special manner and obtain the consent of the child’s parent or guardian. Note the law does not specify an age threshold for children. Rather, regulators will likely promulgate rules on children’s data in future regulations.
Persons with Disabilities – Article 26 states that controllers must also process the data of persons with disabilities in a specified manner and obtain the consent of the person or the guardian to conduct processing activities. Additional regulations will specify further conditions, including how and through what means controllers must communicate with persons with disabilities. Note that the law does not define persons with disabilities.
Automated Decision-Making – Article 10 specifies that data subjects have the right to object to ADM, including profiling that gives rise to legal consequences or has a significant impact on the data subject. This language, which mirrors the GDPR, does not seem to be construed as a general prohibition against qualifying ADM. The PDP Law does not define when its use creates legal consequences or carries a significant impact on individuals. The use of ADM may also trigger a DPIA.
7. Nine Data Subject Rights: From Access to Delay of Processing, to Portability
The PDP Law enumerates nine personal data subject rights and obligates controllers to guarantee those rights as a fundamental data protection principle under the law (Arts 5-15). These rights include:
A right to obtain information about the clarity of identity, the basis of legal interests, the purpose of the request and use of personal data, and the accountability of the party requesting personal data (Art 5). This right expresses the 'principle of transparency' found under Article 16(2)(a);
The right to access and obtain a copy of the data subject's personal data free of charge, except for certain conditions that require a fee (Art 7);
A right to rectification in which the data subject may complete, update, or correct errors and inaccuracies of their personal data (Art 6). This right corresponds to the accuracy principle (Art 16(2)(d));
The right to end processing, delete, or destroy their personal data (Art 8). This right reflects the deletion principle (Article 16(2)(g));
The right to delay or restrict processing (Art 11). Data subjects may only exercise this right in a proportional manner to the original purpose of processing;
The right to withdraw consent in cases where it is provided as a legal basis for processing (Art 9);
The right to object to decision-making measures that are based solely on automated processing, including profiling, and give rise to legal consequences or have a significant impact on the personal data subject (Art 10). The PDP Law illustratively defines profiling as the activity of identifying a person with their employment history, economic condition, health, personal preferences, interests, reliability, behavior, location, or movements electronically;
The right to data portability, which allows the data subject to obtain and use their personal data in a form commonly used or readable by electronic systems as well as send their data to other controllers (Art 13). Subsequent regulations will specify this right further; and
The right to sue and receive compensation in cases where controllers violate the law (Art 12).
Data subjects must submit a registered request to the controller to exercise the right to rectify data, the right to access and obtain a copy of the data, the right to end the processing and delete or destroy personal data, the right to withdraw consent, the right to object to decisions based solely on automated processing, and the right to delay or restrict processing (Article 14).
Similar to general processing obligations, the PDP Law also includes a number of exceptions to the rights (Art 15(1)) (see Section 4 above). While these exceptions kick in under similar conditions, such as for the purposes of national security, law enforcement, or public interests, the PDP Law also recognizes an exception for statistical and scientific research purposes, which it does not define or further clarify (Art 15). Finally, note that Article 33 stipulates controllers must refuse a rectification or access request if it endangers the security, or the physical or mental health, of the data subject or other persons.
8. Cross-Border Data Transfers: Possible to jurisdictions with equal or higher level of protection, or on the basis of consent
Article 56 of the PDP Law governs transfers of personal data outside of Indonesia. Similar to other data protection laws with international data transfer requirements, the PDP Law requires controllers to ensure that the country where the data recipient is located has a level of data protection equal to or higher than the PDP Law.
The PDP Law further requires that controllers, where the law of the recipient country does NOT provide an equal or higher standard, "ensure that there is adequate and binding Personal Data Protection." The specifics of how this might be achieved are not set forth in the law, but Article 56(5) notes that further provisions regarding the transfer of personal data will be included in a separate regulation. It remains to be seen whether this forthcoming regulation will include standardized contractual language or whitelist particular safeguards, such as pseudonymization and encryption, for data transfer purposes.
The PDP Law includes a broader consent exception to its “adequacy” requirement than many other laws. Article 56(4) requires organizations to “obtain the consent of the personal data subject” for transfers where neither the destination country’s laws nor the controller can guarantee an equivalent or higher level of data protection to the PDP Law, but does not explicitly restrict the use of this exemption. In contrast, Article 49 of the GDPR and other similar laws expressly limit the circumstances under which a controller may rely on a data subject’s consent to transfer personal information to a non-adequate jurisdiction without “appropriate safeguards” and impose additional transparency requirements on controllers seeking to do so.
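The decision logic described in this section can be summarized with the sketch below. It is illustrative only: the function and parameter names are assumptions, and the forthcoming implementing regulation will define how each basis is actually established.

```python
# Illustrative summary of the Article 56 transfer logic described above:
# equal-or-higher protection in the destination country, otherwise adequate and
# binding safeguards, otherwise the data subject's consent (Art 56(4)).
# A sketch under stated assumptions, not a compliance tool.
def transfer_basis(destination_has_equal_or_higher_protection: bool,
                   adequate_binding_protection_in_place: bool,
                   data_subject_consented: bool) -> str:
    if destination_has_equal_or_higher_protection:
        return "destination offers equal or higher protection"
    if adequate_binding_protection_in_place:
        return "adequate and binding personal data protection ensured"
    if data_subject_consented:
        return "data subject consent (Art 56(4))"
    return "no transfer basis identified"

if __name__ == "__main__":
    print(transfer_basis(destination_has_equal_or_higher_protection=False,
                         adequate_binding_protection_in_place=False,
                         data_subject_consented=True))
```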
9. Enforcement – Data Protection Authority, Processes, and International Cooperation
Articles 58-61 of the PDP Law cover the establishment of the Indonesian data protection authority (DPA) and its roles and responsibilities. While relatively brief, these articles are important for setting out the identity and contours of the Indonesian DPA. Art 58 provides that the DPA will implement the PDP Law and report to the Indonesian President, placing the institution within the Executive branch of the government. While the PDP Law specifies some of the functions, competences, and processes of the DPA, further details will be set in future regulations (Art 58(5)).
The Indonesian DPA will have four key functions: (i) policy, strategy, and guidance formulation; (ii) supervision of the implementation of the PDP Law; (iii) administrative law enforcement against violations; and (iv) facilitating out-of-court dispute resolution. Article 60 specifies the bounds of the Indonesian DPA’s authority and competence, which in broad terms include:
Supervising compliance of data controllers;
Imposing administrative sanctions for violations committed by data controllers and data processors;
Assisting law enforcement officials in handling allegations of personal data-related criminal offenses under the PDP Law;
Cooperating with foreign DPAs to resolve alleged cross-border violations of the PDP Law;
Publishing the results of the implementation of the PDP Law;
Receiving, investigating and tracking complaints and reports about alleged PDP Law violations;
Summoning and presenting experts, where needed, to examine and investigate alleged violations;
Conducting checks and searches of electronic systems, facilities, spaces, and places used by data controllers and data processors, including obtaining access to data and appointing third parties; and
Requesting legal assistance from Indonesia’s Public Prosecution Service to settle disputes under the PDP Law.
Further details as to procedures and processes for implementing these powers will be provided in future regulations (Art 61).
Finally, Article 62 stipulates that the Indonesian Government (and not just the Indonesian DPA) will have the ability to conduct international cooperation activities on personal data with other governments and international organizations. Such international cooperation shall be carried out as provided under the laws, regulations, and principles of international law. This indicates that Indonesia will engage with other governments on key data protection issues, including possible negotiations around cross-border data flows and cybercrime.
10. Penalties, Civil Liability, and Criminal Liability
The PDP Law imposes a tiered system of administrative sanctions and civil and criminal penalties that increase depending on the severity of the violation. In addition to provisions prohibiting the unlawful collection, use, or disclosure of personal information that may harm data subjects, individuals and organizations must not create false personal data that benefits them at the harmful expense of others.
Administrative Sanctions and Civil Liability
Under the PDP Law, the DPA may issue the following administrative sanctions: (i) a written warning; (ii) temporary suspension of processing activities; (iii) forced deletion of personal data; and/or (iv) administrative fines of up to 2% of the data controller’s annual revenue or sales. The PDP Law does not stipulate a detailed fine structure for organizations’ civil offenses beyond the 2% annual revenue ceiling, nor does it provide guidance on the process for disputing or appealing a fine. Rather, the DPA will specify such procedures in subsequent regulations.
Criminal Liability
Courts will impose criminal liability on both individuals and organizations in two particular circumstances: when they intentionally collect, disclose, or use personal data that does not belong to them to benefit themselves at the harmful expense of others (Art 65), and when they intentionally create false personal data to benefit themselves or which may result in harm to others (Art 66).
Unlawful Collection, Disclosure, or Use – Under Article 67, a person that unlawfully collects or uses personal data in violation of the criminal provisions of the law could receive a maximum prison term of five years and/or a maximum fine of 5 billion rupiah. Those that unlawfully disclose personal data may face up to four years in prison and/or a maximum fine of 4 billion rupiah. In all circumstances, authorities may confiscate profits or assets obtained from the criminal offense (Art 69).
Unlawful Creation of False Data – Article 68 imposes a similar penalty for individuals and organizations that intentionally create false data. In these circumstances, a court may impose a six-year term of imprisonment, a maximum fine of 6 billion rupiah, and/or confiscate assets obtained in the illegal act.
While corporations may only be fined for criminal offenses, the PDP Law specifies that managers, high-ranking officers, or certain owners of the corporation could be incarcerated and personally fined for their actions (Art 70). However, corporations could receive a fine ten times the amount of the maximum fine imposed on an individual or corporate officer and be subject to other punishments including:
Seizure of profits or assets obtained in the criminal offense;
Revocation of licenses, business operations, or physical offices; and/or
Dissolution of the corporation or permanent ban on certain operations.
The PDP Law also stipulates procedures and timelines for complying with a criminal penalty, including punishments for failure to pay and procedures for resolving disputes over auctioned property.
As a reminder, individuals also have a “right to sue and receive compensation” in cases where controllers violate the law, according to Art 12 of the PDP Law (see Section 7).
Concluding Notes
Indonesia’s new law extends comprehensive protection of personal data to approximately 275 million people. Substantively, the law fits well within the emerging global privacy landscape, with landmark features such as lawful grounds for processing, processing principles inspired by the FIPPs, a strong set of data subject rights (including in relation to automated decision-making), accountability, a broad scope of application, and extraterritoriality. At the same time, it retains some national specificities and enriches the landscape with unique features, such as defining “personal data of a general nature” in opposition to “specific data”, or criminalizing the intentional creation of false data.
Notably, the Indonesian Data Protection Law also shows that data localization proposals can lose ground, not only advance. The passing of the PDP Law is significant, confirming that the Asia Pacific is one of the most vibrant regions of the world when it comes to data protection and privacy regulation. The adoption of the PDP Law also comes as Indonesia holds the Presidency of the G20 this year, while the data protection world keeps an eye on India and its back-and-forth efforts to pass a comprehensive data protection law as it prepares to take over the G20 Presidency next year.
FPF Releases Analysis of California’s New Age-Appropriate Design Code
FPF’s Youth & Education team is pleased to publish a new policy brief that builds on the first brief announced below by comparing the United Kingdom’s Age Appropriate Design Code (UK AADC) to the California AADC, which was modeled after the UK AADC. Learn more and download the UK and CA AADC Comparative policy brief here.
New report outlines the key components of California’s Age-Appropriate Design Code Act and critical pending questions
As federal and state policymakers heighten their focus on protecting children’s privacy online, the Future of Privacy Forum (FPF) today released a new policy brief, An Analysis of the California Age-Appropriate Design Code. The new report outlines and analyzes Assembly Bill 2273, the California Age-Appropriate Design Code Act (AADC), a first-of-its-kind privacy-by-design law that represents a significant change in both the regulation of the technology industry and how children will experience online products and services.
Download An Analysis of the California Age-Appropriate Design Code here.
“While policymakers from both sides of the aisle are increasingly prioritizing efforts to secure new protections for children online, in the absence of federal action, California, as it did on consumer privacy, has taken a big step on its own,” said Chloe Altieri, Youth & Education Privacy policy counsel for FPF and an author of the report. “Big changes like this bring a lot of questions and there’s a lot we still don’t know – including exactly what services this bill would apply to. But as policymakers, online service providers, regulators, and others move towards implementation, we wanted to start with assessing what we do know – and flag some of the key unanswered questions.”
The California AADC is notable for extending far beyond the scope of the primary federal children’s online privacy law, the Children’s Online Privacy Protection Act (COPPA), in several key ways. For example, the California AADC raises the baseline age of protection to youth under age 18 (COPPA defines “child” as under age 13) and applies to online businesses with products, services, and features “likely to be accessed by a child,” casting a wider net than COPPA’s current standard of covering sites “directed to children” under 13.
The policy brief expands on these and other elements of the California AADC.
“California has a long history of being a first-mover on consumer privacy protections in the U.S., and it seems very likely that we will start to see these types of child-centered design principles become an increasingly influential model for future legislation and regulation,” said Bailey Sanchez, Youth and Education Privacy policy counsel at FPF and an author of the report. “In fact, about a week after this bill was signed into law, we saw the first example of that, with a similar children’s code bill introduced in New York.”
FPF’s youth and education privacy team has closely tracked the progress of the California AADC; catch up on previous blog posts from June 28 and a September 1 update, and read our statement on the final bill here.
With the withdrawal, India finds itself in a paradoxical position: privacy is a constitutionally protected right, but no meaningful statutory data protections or privacy protections exist. What could explain this volte-face by the Government, after it led four years of public consultation and ministerial deliberation to develop the draft Bill? How did India arrive at this point, and what lies ahead?
In this post, we canter through the history of India’s much-awaited (and now defunct) Personal Data Protection Bill (PDP Bill) and its withdrawal. We tease apart the reasons and realpolitik behind the withdrawal and consider what lies ahead for data protection in India.
How did we get here?
The PDP Bill was not the first time that attempts had been made to create a comprehensive national privacy legislation for India.
A decade ago, attempts were made to create privacy legislation following the release of the Government’s 2010 Approach Paper on the Legal Framework for Privacy. The paper identified the need for privacy and data protection legislation given the privacy risks of several large-scale national ICT-based programs then being initiated, especially India’s universal digital identity program, Aadhaar. The Government then constituted a Committee of Experts (chaired by Justice AP Shah) to consider these issues, which in its final report of 2012 also recommended the creation of privacy legislation for India. Three versions of proposed privacy legislation were “leaked” between 2011 and 2014, but these efforts stalled during an election year and were never resurrected.
The public and legal debate around privacy, however, continued in this period, coming to a head in 2017—once again in connection with Aadhaar. The Supreme Court of India had been hearing a raft of petitions that challenged the constitutionality of the Aadhaar system on the basis that it infringed on Indians’ right to privacy. A central question facing the Court was whether privacy was a fundamental right in India. The reference to this question was made to a nine-judge constitutional bench to definitively settle the question in Indian law.
In the 2017 decision of Justice K.S. Puttaswamy v Union of India, the Supreme Court affirmed that privacy (including informational privacy) was protected under the Constitution of India. More practically, the decision played a role in forcing the hand of the Executive to create legislation on privacy and data protection.
In the background of the debates around the Puttaswamy matter, the Government had created a Committee of Experts (chaired by Justice BN Srikrishna) in 2017 to suggest a draft data protection law. The Supreme Court specifically referred to the efforts of this Committee and noted its expectation (see para 185, page 260 of the lead judgment) that the Government would create a data protection regime. This renewed process to create a data protection law for India resulted in widespread discussion around the substantive principles that India should operationalize into a law.
The resulting draft was introduced in Parliament as the Personal Data Protection Bill, 2019 and referred to a Joint Parliamentary Committee, which submitted its report in December 2021. So 2022 dawned with much excitement that the next (and potentially final) stage for the Bill would arrive, with its re-introduction into Parliament for further consideration or passage.
So why was the PDP Bill withdrawn?
The Government’s reported reason for the withdrawal of the PDP Bill was that the changes suggested by the Joint Parliamentary Committee were so numerous that it was deemed fit to withdraw the Bill and replace it with a new, overarching legislative package. The Joint Committee’s report proposed over 80 changes to the text of the Bill. However, commentators have noted that many of these could have been incorporated into the draft if the Government had the will, and few expected that the changes would result in the wholesale abandonment of the Bill. So what could be the reason for this unexpected withdrawal?
A closer look at the unresolved issues in the PDP Bill at the time of its withdrawal, together with the responses from certain stakeholders, provides some clues to the interests behind the move.
First, a key issue facing resistance related to cross-border data flows. Broadly, the PDP Bill sought to put in place (soft) data localization with a “green lighting” system overseen by the Central Government. This had been a major source of discomfort for many global industry players and carried major commercial and foreign policy implications for India. The opposition was also reflected in the involvement of the US Government, including by flagging the “harms” of the PDP Bill in the United States Trade Representative’s Special 301 report in 2022.
Second, the PDP Bill was squarely in the crosshairs of the broader stand-off between the Indian Government and US-based large technology companies, especially social media intermediaries, given their perceived role in a range of recent political and social events. The traditional “safe harbour” from liability for content hosted by intermediaries is being questioned and revisited. We wrote about new rules for intermediaries passed in 2021, to which amendments are already being considered. The remit of the PDP Bill had expanded during its evolution to include norms for a category of “social media intermediaries”, with provisions for additional oversight over their data processing that had faced pushback.
The withdrawal of the Bill is seen by some as the result of this dynamic. Within industry in India, reactions to the withdrawal were mixed, with many disappointed at being thrown back into legal uncertainty after years of engagement and preparation for the Bill.
A third major issue that had been a source of concern related to the unprecedented exemptions for Government agencies from the provisions of the supposedly “horizontally-applicable” data protection framework. These exemptions were so wide that they risked setting up a “two-speed” data protection law, with widely varying obligations and standards for public and private sector entities. They had raised concerns among both industry players and civil society in India. Outside India, a 2021 report commissioned by the European Data Protection Board on government access to personal data in third countries called out the Indian proposals for their wide exemptions and differential data protection obligations for the Indian government.
However, it is unclear whether the withdrawal of the Bill signals a recognition—or subversion—of these concerns. The Joint Parliamentary Committee failed to recommend constraints on draft Section 35 of the PDP Bill, which enabled blanket exemptions for the Government, despite six of the Committee members filing dissent notes to mark their concerns with the provision.
Lastly, an overarching concern was that the PDP Bill’s mandate had grown unmanageably in the course of its negotiation. The Bill faced the “kitchen sink” problem: a range of issues that are not traditionally within the remit of data protection regulation were added to the draft legislation through its various iterations. Some of the additions to this “kitchen sink” included:
proposals to include the regulation of the use of “non-personal data” within the mandate of the Bill (even while a separate committee was considering the appropriate regulatory framework for this);
proposals to create a “sandbox” administered by the Data Protection Authority, even while other regulators (notably in the financial sector) are already running sandboxes;
recommendations in the Joint Parliamentary Committee’s report to create an Indian equivalent to SWIFT (the global payments instructions system); and
recommendations in the Joint Parliamentary Committee’s report for new regulations for hardware manufacturers of devices collecting personal data.
The widening of the ambit of the Bill seemed to have led it astray from its early mandate of protecting informational privacy and providing a data protection framework for a fair digital economy in India.
Apart from creating tensions and dissonances within the Bill, this over-extension also seems to signal the Government’s difficulty in considering wider digital economy issues independently of a data protection framework. As the view of personal data as a national asset to be harnessed for growth and innovation takes deeper root among decision-makers, it seems clear that any future data protection regime for India will necessarily evolve only alongside broader frameworks for data accessibility and use.
What happens next?
While withdrawing the PDP Bill, India’s Minister for Information Technology, Ashwini Vaishnaw, stated that the Government is planning a new, comprehensive legislative package. The Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, has made several statements regarding plans for a new “Digital India Act” to revamp India’s broader Information Technology Act 2000.
Legal commentators closely following these developments, such as technology law firm Ikigai Law, have noted the exceptionally wide range of issues that this new package is set to cover: from cybercrime to emerging technologies, intermediary regulation, and digital competition issues. This reflects the broader position of the Indian Government, as it seeks to keep its regulatory options open even while it evolves a coherent stance on various aspects of technology governance.
Especially in the post-pandemic environment, there has been increased appetite among policymakers to see data as an asset that can propel growth and innovation. The trend can be seen in other jurisdictions too, including in the direction of recent European proposals flowing from the European data strategy. However, the concern is that the emphasis on data use and monetization for growth could limit the political will to introduce privacy protections. Old narratives that pit privacy protections against innovation and private-sector business opportunities are re-emerging. Meanwhile, the underlying issues of carve-outs for the State’s data use and of state surveillance in the aftermath of the Pegasus scandal in India are yet to be substantively addressed by the Government and policymakers.
The withdrawal of the PDP Bill comes as an increasing number of countries adopt comprehensive data protection legislation. Others in India’s neighborhood, including China, Indonesia, and Bangladesh, have enacted, or are very close to enacting, their data protection laws. Even traditional outliers like the US have made moves toward considering a federal data protection regime, making it increasingly hard to defend the absence of a robust data protection framework in India in the global arena.
With India assuming the presidency of the G20 in December 2022, the Government’s approach to existing G20 efforts, such as the Data Free Flow with Trust initiative (spearheaded by Japan), will be sharply back in focus. In the past, India has opposed or deferred joining such efforts on the basis that it is still preparing its regulatory frameworks on data protection and e-commerce. With the withdrawal of the PDP Bill, the Government’s real intent to create clarity on these frameworks will be scrutinized both internationally and locally.