Manipulative Design: Defining Areas of Focus for Consumer Privacy

In consumer privacy, the phrase “dark patterns” is everywhere. Emerging from a wide range of technical and academic literature, it now appears in at least two US privacy laws: the California Privacy Rights Act and the Colorado Privacy Act (which, if signed by the Governor, will come into effect in 2023).

Under both laws, companies will be prohibited from using “dark patterns,” or “user interface[s] designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision‐making, or choice,” to obtain user consent in certain situations: for example, for the collection of sensitive data.

When organizations give individuals choices, some forms of manipulation have long been barred by consumer protection laws, with the Federal Trade Commission and state Attorneys General prohibiting companies from deceiving or coercing consumers into taking actions they did not intend or striking bargains they did not want. But consumer protection law does not typically prohibit organizations from persuading consumers to make a particular choice. And it is often unclear where the lines fall between cajoling, persuading, pressuring, nagging, annoying, or bullying consumers. The California and Colorado laws seek to do more than merely bar deceptive practices; they prohibit design that “subverts or impairs user autonomy.”

What does it mean to subvert user autonomy, if a design does not already run afoul of traditional consumer protection law? Just as in the physical world, the design of digital platforms and services always influences behavior — what to pay attention to, what to read and in what order, how much time to spend, what to buy, and so on. To paraphrase Harry Brignull (credited with coining the term), not everything “annoying” can be a dark pattern. Some examples of dark patterns are both clear and harmful, such as a design that tricks users into making recurring payments, or a service that offers a “free trial” and then makes it difficult or impossible to cancel. In other cases, the presence of “nudging” may be clear, but the harms may be less clear, such as beta-testing which color shades are most effective at encouraging sales. Still others fall in a legal grey area: for example, is it ever appropriate for a company to repeatedly “nag” users to make a choice that benefits the company, with little or no accompanying benefit to the user?

In Fall 2021, Future of Privacy Forum will host a series of workshops with technical, academic, and legal experts to help define clear areas of focus for consumer privacy, and guidance for policymakers and legislators. These workshops will feature experts on manipulative design in at least three contexts of consumer privacy: (1) Youth & Education; (2) Online Advertising and US Law; and (3) GDPR and European Law. 

As lawmakers address this issue, we identify at least four distinct areas of concern:

This week at the first edition of the annual Dublin Privacy Symposium, FPF will join other experts to discuss principles for transparency and trust. The design of user interfaces for digital products and services pervades modern life and directly impacts the choices people make with respect to sharing their personal information. 

ITPI Event Recap – The EU Data Strategy and the Draft Data Governance Act

On May 19, 2021, the Israel Tech Policy Institute (ITPI), an Affiliate of The Future of Privacy Forum (FPF), hosted, together with Tel Aviv University's Stewart & Judy Colton Law and Innovation Program, an online event on the European Union's (EU) Data Strategy and the Draft Data Governance Act (DGA).

The draft DGA is one of the legislative measures proposed to implement the European Commission's 2020 European Strategy for Data (EU Data Strategy), whose declared goal is to give the EU a “competitive advantage” by enabling it to capitalise on its vast quantity of public and private sector-controlled data. The DGA would establish a framework for re-using more data held by the public sector, for regulating and increasing trust in data intermediaries and similar providers of data sharing services, and for data altruism (i.e., data voluntarily made available by individuals or companies for the “general interest”).

While speakers also addressed other proposals tabled by the European Commission (EC) for regulating players in the data economy, such as the Digital Services Act (DSA) and the Digital Markets Act (DMA), most of the discussion revolved around the DGA's expected impact and underlying policy drivers.

Both Prof. Assaf Hamdani, from Tel Aviv University's Law Faculty, and Limor Shmerling Magazanik, Managing Director at ITPI, assumed moderating roles during the event, which featured speakers from the EC, the Massachusetts Institute of Technology (MIT), Covington & Burling LLP, Mastercard and the Israeli Government's ICT Authority.

The recording of the event is available here.

The DGA as a tool for trustworthy data sharing

Maria Rosaria Coduti, Policy Officer at the EC’s Directorate-General for Communications Networks, Content and Technology (DG CNECT), started by outlining the factors that drove the EC to put forward the EU Data Strategy. As the EC acknowledges the potential of data usage for the benefit of society, it intends to harness it through bolstering the functioning of the single market for data, with due regard to privacy, data protection and competition rules. 

To attain that goal, several questions needed to be addressed to increase trust in data exchanges. These included a lack of clarity on the legal and technical requirements applicable to the re-use of data and the sharing of data by public bodies, as well as the creation of European-based data storage solutions. In addition, the EC identified the need to further empower data subjects through voluntary data sharing, as a complement to their data portability right under the General Data Protection Regulation (GDPR).

According to the EC official, the EU Data Strategy rests on four main pillars: 1) a cross-sectoral governance framework for boosting data access and use; 2) significant investments in European federated cloud infrastructures and interoperability; 3) empowering individuals and SMEs in the EU with digital skills and data literacy; 4) the creation of common European Data Spaces in crucial sectors and public interest domains, through data governance and practical arrangements.

The DGA itself, which intends to create trust in and democratise the data sharing ecosystem, also focuses on four main aspects: 1) the re-use of sensitive public sector data, by addressing “obstacles” to data sharing, complementing the EU's Open Data Directive – together with an upcoming Implementing Act on High-Value Datasets under Article 14 of that Directive – and building on Member States' access regimes; 2) business-to-business (B2B) and consumer-to-business (C2B) data sharing through dedicated, neutral and duly notified service providers (“data intermediaries”); 3) data altruism, enabling individuals to share their personal data for the common good with registered non-profit dedicated organisations, notably by signing a to-be-developed standard European data altruism consent form; 4) the European Data Innovation Board, an expert group with an EC secretariat, focused on technical standardisation and on harmonising practices around data re-use, data intermediaries and data altruism.

Specifically on the topic of data intermediaries, and replying to questions from the audience, Maria Rosaria Coduti mentioned the importance of ensuring they remain neutral. This entails that intermediaries should not be allowed to process the data they are entrusted with for their own purposes. In this respect, Recital 22 of the draft DGA excludes cloud service providers and entities which aggregate, enrich or transform the data for subsequent sale (e.g., data brokers) from its scope of application.

It should be noted that the third Compromise Text released by the Portuguese Presidency of the Council of the EU opens the possibility for intermediaries to use the data for service improvement, as well as to offer “specific services to improve the usability of the data and ancillary services that facilitate the sharing of data, such as storage, aggregation, curation, pseudonymisation and anonymisation.”

Coduti underlined the importance of preventing any conflicts of interest for intermediaries. Data sharing services must thus be rendered through a legal entity separate from the other activities of the intermediary, notably when the latter is a commercial company. This would, in principle, mean that legal entities acting as intermediaries under the DGA would not be covered by the proposed DSA as hosting service providers, nor as “gatekeepers” under the DMA proposal. Ultimately, the EC wishes intermediaries to flourish by becoming trusted players and valuable tools for businesses and individuals in the data economy.

The upcoming Data Act: facilitating B2B and B2G data sharing

Lastly, the EC official briefly addressed the upcoming EU Data Act. In parallel with the revision of the 25-year-old Database Directive, this piece of legislation will focus on business-to-business (B2B) and business-to-government (B2G) data sharing. Its aim will be to maximise the use of data for innovation in different sectors and for evidence-based policymaking, without harming the interests of companies that invest in data generation (e.g., companies that produce smart devices and sensors).

While the Data Act will not address the issue of property rights over data, it will seek to remove contractual and technical obstacles to data sharing and usage in industrial ecosystems (e.g., in the mobility, health, energy and agricultural spaces). Coduti stressed that the discussion around data ownership is complex, due to the proliferation of data obtained from IoT devices and the emergence of edge computing, as well as the necessary balance between preserving the utility of datasets and safeguarding data subjects' rights through anonymisation.

On the latter point, Rachel Ran, Data Policy Manager at the Israeli Government ICT Authority, echoed Coduti’s concerns, stating that data cannot be universally open. According to the Israeli official, there is a tradeoff between data utility and individual privacy that has to be accepted, but questions remain about the level of involvement that governments should have in determining this balance.

On May 28, the EC released its Inception Impact Assessment on the Data Act. Stakeholders are encouraged to provide their feedback over a four-week period.

An increasingly complex digital regulatory framework in the EU

Henriette Tielemans, IAPP Senior Westin Research Fellow, offered a comprehensive overview of the EC’s data-related legislative proposals which were tabled in the last six months, other than the DGA. The trilogue work currently underway in the EU institutions on the proposed ePrivacy Regulation was also mentioned.  

In the context of its Strategy for Artificial Intelligence (AI), the EC has very recently published a proposal for a Regulation laying down harmonised rules on AI. Tielemans saw this proposal as important and groundbreaking, suggesting that the EC is looking to set standards also beyond EU borders, like it did with the GDPR. She stressed that the proposal takes a cautious risk-based approach.

Tielemans highlighted the proposal's dedicated provision (Article 5) on banned AI practices, arguing that the most contentious amongst those is real-time remote biometric identification for law enforcement purposes in public spaces. However, she noted that the provision contains wide exceptions to the prohibition, allowing law enforcement authorities to use facial recognition in that context, subject to conditions. Tielemans predicted that the provision will be a “hot potato” during the Regulation's negotiations, notably within the Council of the EU.

Furthermore, Tielemans stressed that “high-risk AI systems”, which are the major focus of the proposal, are not defined thereunder, as the EU intends to ensure the Regulation is future-proof. However, Annex III puts forward a provisional list of high-risk AI systems, such as systems used for educational and vocational training (e.g., to determine who will be admitted to a given school). Annex II is more complex, due to its interaction with other EU acts: if a product is subject to a third-party conformity assessment under other EU laws and the provider would like to integrate an AI component into it, that component would be considered high-risk AI. Tielemans also noted that, once a system is qualified as high-risk, providers become subject to a number of obligations, including on training models and record keeping.

On the DSA, Tielemans pointed out that the proposal is geared towards providers of online intermediary services. It provides rules on the liability of providers for third-party content and on how they should conduct content moderation. In principle, she stressed, such providers shall not be liable for information which is conveyed or stored through their services, although they are burdened with takedown – but no general monitoring – obligations. The proposal distinguishes between different types of providers, with their respective obligations matching their role and their degree of importance in the online ecosystem. Hosting service providers have fewer obligations than online platforms, and online platforms fewer than very large online platforms: a sort of obligation “cascade”.

Lastly, Tielemans concisely mentioned the DMA and the revised Directive on Security of Network and Information Systems (NIS 2 Directive) as other noteworthy EC initiatives, identifying the former as a competition law toolbox and an “outlier” in the EU Data Strategy. She also pinpointed the latter’s broader scope, increased oversight and heavier penalties as important advances in the EU’s cybersecurity framework. 

Reconciling “overarching” with “sectoral” regulation on data sharing

Helena Koning, Assistant General Counsel, Global Privacy Compliance Assurance & Europe Data Protection Officer at Mastercard, provided some thoughts about the draft DGA’s potential impact on the financial industry.

She started by outlining the actors involved in the data sharing ecosystem. These include: (i) individuals, who demand data protection and responsible use of data; (ii) businesses, that wish to innovate through data usage and insights drawn from data, notably by personalising their products and services; (iii) policymakers, who increasingly regulate data usage; and (iv) regulators with bolstered enforcement powers in this space.

Koning then stressed that companies in the financial sector are currently subject to a significant regulatory overlap when it comes to data collection and sharing, with the ePrivacy Directive applying to terminal equipment information, the GDPR applying to personal data in general, and the Second Payment Services Directive (PSD2) covering payment data sharing. While there is already guidance from the European Data Protection Board (EDPB) on the interplay between the GDPR and PSD2, Koning added that lawmakers tend to regulate in silos, adopting overlapping and sometimes conflicting definitions and obligations. This results in financial sector players being pushed to wear very different hats under each framework (e.g., as payment service providers and data controllers). In this regard, Koning said the EC should make further efforts to ensure consistency between EU acts before proposing new legislation.

Koning showed concern that instruments such as the DGA and the Data Act will add to this regulatory complexity and that SMEs and citizens will have a hard time complying with and understanding the new laws. On this point, she addressed the fact that the DGA and PSD2 have diverging models for fostering data-based innovation: as an illustration, while PSD2 mandates banks to share customer data with fintechs, free of charge and upon the customer’s contractual consent, the DGA centres around voluntary data sharing, for which public bodies may charge fees and data subjects are called to give GDPR-aligned consent. 

Furthermore, Koning expressed doubts about the immediate benefit that data holders and subjects would get from sharing their data with intermediaries, often in exchange for service fees.

Alternatives to data sharing and focus on data insights

Dr. Thomas Hardjono, from MIT Connection Science & Engineering, conducts research on using data to better understand and solve societal issues, including the spread of diseases and inequality. Hardjono started by commending the direction taken by the EC with the DGA, stating that his group at MIT had been studying issues relating to the commoditization of personal data since the publication of a 2011 World Economic Forum report. In Hardjono's view, public data is a societal asset that should be treated as carefully and comprehensively as personal data.

On that point, Rachel Ran mentioned that governments should seek to encourage data sharing through data governance and to centre their policies around the needs of data subjects. She added that data products – like Application Programming Interfaces (APIs) – should be human-centered. Data should be seen as a product, but not a commodity, especially when it comes to sharing government data. 

Ran continued by describing one of the Israeli ICT Authority's major projects: creating standard APIs for G2G and G2B data sharing. But there are significant challenges to this task, including: (i) unstructured and fragmented data; (ii) duplicated records and gaps; and (iii) inconsistent data formats and definitions. These ultimately lead to suboptimal decision-making by government bodies, which are not properly informed by accurate and updated data.

Regarding data sharing services, Hardjono stated that data intermediaries regulated under the DGA may face specific hurdles, notably concerning the intelligibility of the data conveyed to data users. Questions remain as to whether the draft DGA's prohibition on intermediaries aggregating and structuring data could prevent them from developing services that are attractive to potential data users. Koning added that a number of data sharing collaborations are already in place and that new EU regulation should facilitate rather than prevent them.

On that topic, Hardjono mentioned communities would be more interested in accessing insights and statistics about their citizens’ activity (e.g., on transportation, infrastructure usage and spending patterns), rather than large sets of raw data. On the other hand, aggregated data could be publicly shared with the wider society.

As a solution, Hardjono proposed developing and making available Open Algorithms, allowing data users (e.g., a municipality) to access specific datasets of their interest and to directly ask questions to and obtain insights from data holders about such datasets, through APIs. This would also avoid moving the data around, by keeping it with data holders.
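The Open Algorithms pattern Hardjono describes can be illustrated with a rough sketch (all names, queries and sample records here are invented for illustration): the data holder keeps the raw records, exposes only a catalogue of pre-approved queries, and returns the resulting insight rather than the data itself.

```python
# Minimal sketch of the Open Algorithms idea: the raw data never leaves
# the data holder; an approved query runs where the data lives, and only
# the aggregate insight is returned to the data user.
from collections import Counter


class DataHolder:
    """Holds raw records and only answers pre-approved aggregate queries."""

    # A vetted "algorithm catalogue": callers may only run these by name.
    APPROVED_QUERIES = {
        "trips_per_district": lambda records: dict(
            Counter(r["district"] for r in records)
        ),
        "average_spend": lambda records: round(
            sum(r["spend"] for r in records) / len(records), 2
        ),
    }

    def __init__(self, records):
        self._records = records  # raw data, never exposed directly

    def ask(self, query_name):
        query = self.APPROVED_QUERIES.get(query_name)
        if query is None:
            raise PermissionError(f"Query not approved: {query_name}")
        return query(self._records)  # only the insight leaves the holder


# A hypothetical municipality asking questions without ever receiving raw records:
holder = DataHolder([
    {"district": "north", "spend": 12.0},
    {"district": "north", "spend": 8.0},
    {"district": "south", "spend": 10.0},
])
print(holder.ask("trips_per_district"))  # {'north': 2, 'south': 1}
print(holder.ask("average_spend"))       # 10.0
```

In a real deployment the `ask` call would sit behind an API and the catalogue would be subject to governance review, but the core property is the same: queries move to the data, not the other way around.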

Another question then arises, according to Hardjono: given the commercial value of data insights, there should be business incentives, possibly via fair remuneration, to gather, structure and analyze the data. In that context, Hardjono stressed that clarifying the intermediaries' business model is crucial and should be addressed by the DGA. He also suggested that a joint remuneration model, shared between the public sector and data users, could be devised. This in turn raises new questions about data ownership, notably who owns the insights (the holder or the intermediary?) and under what title: could they be considered the provider's intellectual property?

Responding to Prof. Assaf Hamdani's observation that some cities are now requiring or incentivising companies and citizens to share data through administrative procedures and contracts, Hardjono regretted that the DGA does not devote enough attention to so-called data cooperatives. While Article 9(1)(c) of the DGA does describe the services that data cooperatives should offer to data subjects and SMEs (including assisting them in negotiating data processing terms), there is an extensive academic discussion in the US about other roles these cooperatives could play in defending citizens' interests, which could feed into the DGA debates. On this issue, Ran held that such cooperatives should address data subjects' needs and share data for a specified purpose, praising the DGA model in that regard.

Lastly, Hardjono highlighted the fact that certain datasets may have implicit bias and that algorithms used to analyze such data may thus be implicitly biased. Therefore, he held that ensuring algorithmic auditing and fairness is key to achieving good societal results from the usage of the large volumes of data at the relevant players’ disposal. 

Ran added that, besides being trustworthy, data should also be discoverable, interoperable (with common technical standards) and self-describing, to facilitate its sharing and ensure its usefulness.

For further reading, you can check out:

ITPI & FPF’s Report: “Using Health Data for Research: Evolving National Policies

FPF’s Report: “Understanding Corporate Data Sharing Decisions: Practices, Challenges, and Opportunities for Sharing Corporate Data with Researchers

FPF’s Event Report: “Brussels Privacy Symposium 2020 – Research and the Protection of Personal Data under the GDPR

Preemption in US Federal Privacy Laws

As federal lawmakers consider proposals for a federal baseline privacy law in the United States, one of the most complex challenges is federal preemption, or the extent to which a federal law should nullify the state laws on the books and the emerging laws addressing the collection and use of personal information.

Many recognize the benefits to businesses and consumers of establishing uniform national standards for the collection, transfer, and sale of commercial personal information, if those standards are strong and flexible enough to meet new challenges that arise. Such standards will require, at least to an extent, replacing individual state efforts. At the same time, however, there are hundreds of state privacy laws on the books. Many of these laws have a uniquely local character, such as laws governing student records, medical information, and library records. Preemption only becomes more complicated as additional states join recent leaders such as Virginia, California, and Colorado in passing omnibus data privacy laws that apply to data collected across borders from websites, apps, and other digital services.

What can we learn from how existing federal privacy laws have addressed preemption? As a starting point, FPF staff have surveyed twelve (12) federal sectoral privacy laws passed between 1968 and 2003, and examined the extent to which they preempt similar state privacy laws. A comprehensive consumer privacy law would almost certainly preserve most of these sectoral laws and their state counterparts. They provide useful insight into how Congress has addressed federal preemption in the past.

In surveying these 12 federal privacy laws, we observe a few notable features, and offer some thoughts (below) on what factors have influenced Congressional decisions about preemption:

Factors Influencing Preemption Decisions

Given the case-by-case variability described above and in the Discussion Draft, what determines when and how Congress has chosen to preempt state and local regulations that overlap or supplement federal privacy laws?

Congress is a political body, and politics surely play a role. But our analysis suggests that Congress pursues an overall goal of balancing individual rights with practical business compliance. We suggest that Congress pursues those goals by weighing several factors aside from political considerations. These likely include, for example: (1) the existence of national consensus on harmful business practices (versus expected regional variation in what is considered harmful); (2) the comprehensiveness or prescriptive nature of the law; (3) the national versus localized nature of business practices; and (4) the localized nature of data (which is sometimes, but not always, related to identifiability of data).

For example, a key difference between the federal commercial emailing law, CAN-SPAM (very preemptive), and the federal commercial telemarketing law, the Telephone Consumer Protection Act (not preemptive except with respect to certain inter-state standards) is the relative ease with which the personal data being regulated can be localized, or have its geographic location readily inferred. Email addresses, despite being personal information, give no indication of the owner’s location, while residential phone numbers were straightforward to relate to a particular state when the law was drafted in 1991. 

Thus, while differing state telemarketing laws can present compliance costs for marketing companies operating across state lines, such laws do not create impractical barriers to compliance. In addition, telemarketing represents an issue on which there may be much more regional variation than national consensus on appropriate local business practices: for example, some states ban political calls, some ban calls during certain times of day, and some maintain additional do-not-call registries (such as Texas’s do-not-call registry for businesses, to allow them to avoid commercial calls from electricity providers). 

As a contrasting example, the Fair Credit Reporting Act (largely preemptive) represented, in 1970, a strong national consensus on appropriate business practices applicable primarily to the three dominant credit bureaus in the United States, all operating effectively nationwide. At the same time, credit reports are involved in relatively localized business practices and involve identifiable information from which location can usually be inferred (e.g., from home addresses). As a result, business compliance with different state standards may not have been impossible, but was perhaps, ultimately, not desirable due to the comprehensive and prescriptive nature of the law and the relative national consensus on appropriate norms for credit bureaus.

These factors are just some of the myriad considerations that we suggest may influence preemption decisions for a federal privacy law, if the goal is to balance consumer privacy interests against concerns about practical business compliance. Further research might include, for example, a review of Congressional histories, or learning from other, non-privacy federal laws. We welcome feedback on the Discussion Draft.

Read the next blog in this series: Navigating Preemption through the Lens of Existing State Privacy Laws.

India’s new Intermediary & Digital Media Rules: Expanding the Boundaries of Executive Power in Digital Regulation


Author: Malavika Raghavan

India’s new rules on intermediary liability and regulation of publishers of digital content have generated significant debate since their release in February 2021. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the Rules) have:

The majority of these provisions were unanticipated, resulting in a raft of petitions filed in High Courts across the country challenging the validity of various aspects of the Rules, including their constitutionality. On 25 May 2021, the three-month compliance period for some new requirements for significant social media intermediaries (so designated by the Rules) expired without many intermediaries being in compliance, opening them up to liability under the Information Technology Act as well as wider civil and criminal laws. This has reignited debates about the impact of the Rules on business continuity and liability, citizens' access to online services, privacy and security.

Following on FPF’s previous blog highlighting some aspects of these Rules, this article presents an overview of the Rules before deep-diving into critical issues regarding their interpretation and application in India. It concludes by taking stock of some of the emerging effects of these new regulations, which have major implications for millions of Indian users, as well as digital services providers serving the Indian market. 

1. Brief overview of the Rules: Two new regimes for ‘intermediaries’ and ‘publishers’ 

The new Rules create two regimes for two different categories of entities: ‘intermediaries’ and ‘publishers’.  Intermediaries have been the subject of prior regulations – the Information Technology (Intermediaries guidelines) Rules, 2011 (the 2011 Rules), now superseded by these Rules. However, the category of “publishers” and related regime created by these Rules did not previously exist. 

The Rules begin with commencement provisions and definitions in Part I. Part II of the Rules applies to intermediaries (as defined in the Information Technology Act 2000 (IT Act)) that transmit electronic records on behalf of others, including online intermediary platforms (like YouTube, WhatsApp, Facebook). The rules in this part primarily flesh out the protections offered in Section 79 of the IT Act, which gives passive intermediaries the benefit of a ‘safe harbour’ from liability for objectionable information shared by third parties using their services — somewhat akin to the protections under Section 230 of the US Communications Decency Act. To claim this protection from liability, intermediaries need to undertake certain ‘due diligence’ measures, including informing users of the types of content that cannot be shared and following content take-down procedures (for which safeguards evolved over time through important case law). The new Rules supersede the 2011 Rules and also significantly expand on them, introducing new provisions and additional due diligence requirements that are detailed further in this blog.

Part III of the Rules applies to a new, previously non-existent category of entities designated as ‘publishers’, further classified into the subcategories of ‘publishers of news and current affairs content’ and ‘publishers of online curated content’. Part III then sets up extensive requirements for publishers to adhere to specific codes of ethics, onerous content take-down requirements, and a three-tier grievance process, with appeals lying to an Executive Inter-Departmental Committee of Central Government bureaucrats.

Finally, the Rules contain two provisions relating to content-blocking orders that apply to all entities (i.e., both intermediaries and publishers). They lay out a new process by which Central Government officials can issue directions to intermediaries and publishers to delete, modify or block content, either following a grievance process (Rule 15) or through “emergency” blocking orders, which may be passed ex parte. These Rules stem from powers to issue directions to intermediaries to block public access to any information through any computer resource (Section 69A of the IT Act). Interestingly, these provisions have been introduced separately from the existing rules for blocking, the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

2. Key issues for intermediaries under the Rules

2.1 A new class of ‘social media intermediaries’

The term ‘intermediary’ is a broadly defined term in the IT Act covering a range of entities involved in the transmission of electronic records. The Rules introduce two new sub-categories, being:

Given that a popular messaging app like WhatsApp has over 400 million users in India, the threshold appears to be fairly conservative. The Government may order any intermediary to comply with the same obligations as SSMIs (under Rule 6) if its services are adjudged to pose a risk of harm to national security, the sovereignty and integrity of India, India's foreign relations or public order.

SSMIs must follow substantially more onerous “additional due diligence” requirements to claim the intermediary safe harbour (including mandatory traceability of message originators and proactive automated screening, as discussed below). These new requirements raise privacy and data security concerns: they extend beyond traditional notions of platform “due diligence”, potentially expose the content of private communications, and in doing so create new privacy risks for users in India.    

2.2 Additional requirements for SSMIs: resident employees, mandated message traceability, automated content screening 

Extensive new requirements are set out in the new Rule 4 for SSMIs. 

Provisions mandating modifications to the technical design of encrypted platforms to enable traceability seem to go beyond merely requiring intermediary due diligence. Instead, they appear to draw on separate Government powers relating to the interception and decryption of information (under Section 69 of the IT Act). In addition, separate stand-alone rules laying out procedures and safeguards for such interception and decryption orders already exist in the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009. Rule 4(2) even acknowledges these provisions, raising the question of whether these Rules (relating to intermediaries and their safe harbours) can be used to expand the scope of Section 69 or the rules thereunder. 

Proceedings initiated by WhatsApp LLC in the Delhi High Court, and by Free and Open Source Software (FOSS) developer Praveen Arimbrathodiyil in the Kerala High Court, have challenged the legality and validity of Rule 4(2) on grounds including that it is ultra vires, going beyond the scope of its parent statutory provisions (Sections 79 and 69A) and the intent of the IT Act itself. Substantively, the provision is also challenged on the basis that it would violate users’ fundamental rights, including the right to privacy and the right to free speech and expression, given the chilling effect that the stripping back of encryption would have.

Though the objective of the provision is laudable (i.e. to limit the circulation of violent or previously removed content), the move towards proactive automated monitoring has raised serious concerns regarding censorship on social media platforms. Rule 4(4) appears to acknowledge the deep tensions this requirement creates with privacy and free speech, as seen in the provisos requiring that screening measures be proportionate to the free speech and privacy interests of users, that they be subject to human oversight, and that automated tools be reviewed for fairness, accuracy, propensity for bias or discrimination, and impact on privacy and security. However, given the vagueness of this wording weighed against the trade-off of losing intermediary immunity, scholars and commentators have noted the obvious potential for ‘over-compliance’ and excessive screening out of content. Many (including the petitioner in the Praveen Arimbrathodiyil matter) have also noted that automated filters are not sophisticated enough to differentiate between violent unlawful images and legitimate journalistic material. The concern is that such measures could result in the large-scale screening out of ‘valid’ speech and expression, with serious consequences for the constitutional rights to free speech and expression, which also protect ‘the rights of individuals to listen, read and receive the said speech’ (Tata Press Ltd v. Mahanagar Telephone Nigam Ltd, (1995) 5 SCC 139). 

Such requirements appear to be aimed at creating more user-friendly networks of intermediaries. However, the imposition of a single set of requirements is especially onerous for smaller or volunteer-run intermediary platforms which may not have income streams or staff to provide for such a mechanism. Indeed, the petition in the Praveen Arimbrathodiyil matter has challenged certain of these requirements as being a threat to the future of the volunteer-led Free and Open Source Software (FOSS) movement in India, by placing similar requirements on small FOSS initiatives as on large proprietary Big Tech intermediaries.  

Other obligations that stipulate turn-around times for intermediaries include (i) a requirement to remove or disable access to content within 36 hours of receipt of a Government or court order relating to unlawful information on the intermediary’s computer resources (under Rule 3(1)(d)), and (ii) a requirement to provide information within 72 hours of receiving an order from an authorised Government agency undertaking investigative activity (under Rule 3(1)(j)). 

Similar to the concerns with automated screening, there are concerns that the new grievance process could lead to private entities becoming the arbiters of appropriate content and free speech, a position that was specifically reversed in a seminal 2015 Supreme Court decision clarifying that a Government or court order was needed for content take-downs.  

3. Key issues for the new ‘publishers’ subject to the Rules, including OTT players

3.1 New Codes of Ethics and three-tier redress and oversight system for digital news media and OTT players 

Digital news media and OTT players have been designated as ‘publishers of news and current affairs content’ and ‘publishers of online curated content’ respectively in Part III of the Rules. Each category has then been subjected to a separate Code of Ethics. In the case of digital news media, the codes already applicable to newspapers and cable television have been applied. For OTT players, the Appendix sets out principles regarding content that can be created, as well as display classifications. To enforce these codes and to address grievances from the public about their content, publishers are now mandated to set up a grievance system, which forms the first tier of a three-tier “appellate” system culminating in an oversight mechanism by the Central Government with extensive powers of sanction.  

At least five legal challenges have been filed in various High Courts contesting the competence and authority of the Ministry of Electronics & Information Technology (MeitY) to pass the Rules, and their validity, namely: (i) in the Kerala High Court, LiveLaw Media Private Limited vs Union of India, WP(C) 6272/2021; in the Delhi High Court, three petitions tagged together, being (ii) Foundation for Independent Journalism vs Union of India, WP(C) 3125/2021, (iii) Quint Digital Media Limited vs Union of India, WP(C) 11097/2021, and (iv) Sanjay Kumar Singh vs Union of India and others, WP(C) 3483/2021; and (v) in the Karnataka High Court, Truth Pro Foundation of India vs Union of India and others, W.P. 6491/2021. This is in addition to a fresh petition filed on 10 June 2021, TM Krishna vs Union of India, challenging the entirety of the Rules (both Parts II and III) on the basis that they violate the rights to free speech (Article 19 of the Constitution) and privacy (including under Article 21), and that they fail the test of arbitrariness (under Article 14), being manifestly arbitrary and falling foul of the principles of delegation of powers. 

Some of the key issues emerging from these Rules in Part III and the challenges to them are highlighted below. 

3.2 Lack of legal authority and competence to create these Rules

There has been substantial debate on the lack of clarity regarding the legal authority of the Ministry of Electronics & Information Technology (MeitY) under the IT Act. These concerns arise at various levels. 

First, there is a concern that Levels I and II of the grievance system result in a privatisation of adjudications relating to the free speech and expression of creative content producers, which would otherwise be litigated in Courts and Tribunals as matters of free speech. As noted by many (including the LiveLaw petition at page 33), this could have the effect of overturning the judicial precedent in Shreya Singhal v. Union of India ((2015) 5 SCC 1), which specifically read down Section 79 of the IT Act to avoid a situation where private entities were the arbiters determining the legitimacy of take-down orders. Second, despite referring to “self-regulation”, this system is subject to executive oversight (unlike the existing models for offline newspapers and broadcasting).

The Inter-Departmental Committee is composed entirely of Central Government bureaucrats. It may review complaints escalated through the three-tier system or referred directly by the Ministry, following which it can deploy a range of sanctions, from warnings, to mandating apologies, to deleting, modifying or blocking content. This raises the question of whether the Committee meets the legal requirements for an administrative body undertaking a ‘quasi-judicial’ function, especially one that may adjudicate on rights relating to free speech and privacy. Finally, while the objective of creating standards and codes for such content creators may be laudable, it is unclear whether such an extensive oversight mechanism, with powers of sanction over online publishers, can validly be created under the rubric of intermediary liability provisions.  

4. New powers to delete, modify or block information for public access 

As described at the start of this blog, the Rules add new powers for the deletion, modification and blocking of content from intermediaries and publishers. While Section 69A of the IT Act (and the rules thereunder) does include blocking powers for the Government, these exist only vis-à-vis intermediaries; Rule 15 expands this power to ‘publishers’. It also provides a new avenue for such orders to intermediaries, outside of the existing rules for blocking information under the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009.

Graver concerns arise from Rule 16, which allows for the passing of emergency orders blocking information, including without giving publishers or intermediaries an opportunity to be heard. There is a provision for such an order to be reviewed by the Inter-Departmental Committee within two days of its issue. 

Both Rule 15 and 16 apply to all entities contemplated in the Rules. Accordingly, they greatly expand executive power and oversight over digital media services in India, including social media, digital news media and OTT on-demand services. 

5. Conclusions and future implications

The new Rules in India have opened up deep questions for online intermediaries and providers of digital media services serving the Indian market. 

For intermediaries, this creates a difficult and even existential choice: the requirements (especially those relating to traceability and automated screening) appear to set an improbably high bar given the reality of their technical systems. However, failure to comply results not only in the loss of the safe harbour from liability but, as seen in the new Rule 7, also opens them up to punishment under the IT Act and criminal law in India. 

For digital news and OTT players, the consequences of non-compliance and the level of enforcement remain to be seen, especially given the open questions regarding the validity of the legal basis for these Rules. With numerous petitions filed against them, there is also substantial uncertainty about their future, although the Rules have the full force of law at present. 

Overall, it does appear that attempts to create a ‘digital media’ watchdog would be better dealt with in standalone legislation, potentially sponsored by the Ministry of Information and Broadcasting (MIB), which has the traditional remit over such areas. Indeed, the administration of Part III of the Rules has been delegated by MeitY to the MIB, pointing to the genuine split in competence between these Ministries.  

Finally, potential overlaps with India’s proposed Personal Data Protection Bill (if passed) create further tensions. It remains to be seen whether the provisions on traceability will survive the test of constitutional validity set out in India’s privacy judgment (Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1). Irrespective of that determination, the Rules appear to be in some dissonance with the data retention and data minimisation requirements in the last draft of the Personal Data Protection Bill, not to mention other obligations relating to privacy by design and data security safeguards. Interestingly, the definition of ‘social media intermediary’ that the Bill (released in December 2019) included in an explanatory clause to its section 26(4) closely tracks the definition in Rule 2(w), but departs from it by carving out certain intermediaries. This is already resulting in moves such as Google’s plea of 2 June 2021 in the Delhi High Court asking for protection from being declared a social media intermediary. 

These new Rules have exposed the inherent tensions within digital regulation between the goals of freedom of speech and expression and the right to privacy, and the competing governance objectives of law enforcement (such as limiting the circulation of violent, harmful or criminal content online) and national security. The ultimate legal effect of these Rules will be determined as much by the outcome of the various petitions challenging their validity as by the enforcement challenges raised by casting such a wide net, one that covers millions of users and thousands of entities, all engaged in creating India’s growing digital public sphere.



New FPF Report Highlights Privacy Tech Sector Evolving from Compliance Tools to Platforms for Risk Management and Data Utilization

As we enter the third phase of development of the privacy tech market, purchasers are demanding more integrated solutions, product offerings are more comprehensive, and startup valuations are higher than ever, according to a new report from the Future of Privacy Forum and Privacy Tech Alliance. These factors are leading to companies providing a wider range of services, acting as risk management platforms, and focusing on support of business outcomes.

“The privacy tech sector is at an inflection point, as its offerings have expanded beyond assisting with regulatory compliance,” said FPF CEO Jules Polonetsky. “Increasingly, companies want privacy tech to help businesses maximize the utility of data while managing ethics and data protection compliance.”

According to the report, “Privacy Tech’s Third Generation: A Review of the Emerging Privacy Tech Sector,” regulations are often the biggest driver for buyers’ initial privacy tech purchases. Organizations also are deploying tools to mitigate potential harms from the use of data. However, buyers serving global markets increasingly need privacy tech that offers data availability and control and supports its utility, in addition to regulatory compliance. 

The report finds the COVID-19 pandemic has accelerated global marketplace adoption of privacy tech as dependence on digital technologies grows. Privacy is becoming a competitive differentiator in some sectors, and TechCrunch reports that 200+ privacy startups have together raised more than $3.5 billion over hundreds of individual rounds of funding. 

“The customers buying privacy-enhancing tech used to be primarily Chief Privacy Officers,” said report lead author Tim Sparapani. “Now it’s also Chief Marketing Officers, Chief Data Scientists, and Strategy Officers who value the insights they can glean from de-identified customer data.”

The report highlights five trends in the privacy enhancing tech market:

The report also draws seven implications for competition in the market:

The report makes a series of recommendations, including that the industry define as a priority a common vernacular for privacy tech; set standards for technologies in the “privacy stack” such as differential privacy, homomorphic encryption, and federated learning; and explore the needs of companies for privacy tech based upon their size, sector, and structure. It calls on vendors to recognize the need to provide adequate support to customers to increase uptake and speed time from contract signing to successful integration.

The Future of Privacy Forum launched the Privacy Tech Alliance (PTA) as a global initiative with a mission to define, enhance and promote the market for privacy technologies. The PTA brings together innovators in privacy tech with customers and key stakeholders.

Members of the PTA Advisory Board, which includes Anonos, BigID, D-ID, Duality, Ethyca, Immuta, OneTrust, Privacy Analytics, Privitar, SAP, Truata, TrustArc, Wirewheel, and ZL Tech, have formed a working group to address impediments to growth identified in the report. The PTA working group will define a common vernacular and typology for privacy tech as a priority project with chief privacy officers and other industry leaders who are members of FPF. Other work will seek to develop common definitions and standards for privacy-enhancing technologies such as differential privacy, homomorphic encryption, and federated learning and identify emerging trends for venture capitalists and other equity investors in this space. Privacy Tech companies can apply to join the PTA by emailing [email protected].


Perspectives on the Privacy Tech Market

Quotes from Members of the Privacy Tech Alliance Advisory Board on the Release of the “Privacy Tech’s Third Generation” Report


“The ‘Privacy Tech Stack’ outlined by the FPF is a great way for organizations to view their obligations and opportunities to assess and reconcile business and privacy objectives. The Schrems II decision by the Court of Justice of the European Union highlights that skipping the second ‘Process’ layer can result in desired ‘Outcomes’ in the third layer (e.g., cloud processing of, or remote access to, cleartext data) being unlawful – despite their global popularity – without adequate risk management controls for decentralized processing.” — Gary LaFever, CEO & General Counsel, Anonos


“As a founding member of this global initiative, we are excited by the conclusions drawn from this foundational report – we’ve seen parallels in our customer base, from needing an enterprise-wide solution to the rich opportunity for collaboration and integration. The privacy tech sector continues to mature as does the imperative for organizations of all sizes to achieve compliance in light of the increasingly complicated data protection landscape.’’—Heather Federman, VP Privacy and Policy at BigID


“There is no doubt of the massive importance of the privacy sector, an area which is experiencing huge growth. We couldn’t be more proud to be part of the Privacy Tech Alliance Advisory Board and absolutely support the work they are doing to create alignment in the industry and help it face the current set of challenges. In fact we are now working on a similar initiative in the synthetic media space to ensure that ethical considerations are at the forefront of that industry too.” — Gil Perry, Co-Founder & CEO, D-ID


“We congratulate the Future of Privacy Forum and the Privacy Tech Alliance on the publication of this highly comprehensive study, which analyzes key trends within the rapidly expanding privacy tech sector. Enterprises today are increasingly reliant on privacy tech, not only as a means of ensuring regulatory compliance but also in order to drive business value by facilitating secure collaborations on their valuable and often sensitive data. We are proud to be part of the PTA Advisory Board, and look forward to contributing further to its efforts to educate the market on the importance of privacy-tech, the various tools available and their best utilization, ultimately removing barriers to successful deployments of privacy-tech by enterprises in all industry sectors” — Rina Shainski, Chairwoman, Co-founder, Duality


“Since the birth of the privacy tech sector, we’ve been helping companies find and understand the data they have, compare it against applicable global laws and regulations, and remediate any gaps in compliance. But as the industry continues to evolve, privacy tech also is helping show business value beyond just compliance. Companies are becoming more transparent, differentiating on ethics and ESG, and building businesses that differentiate on trust. The privacy tech industry is growing quickly because we’re able to show value for compliance as well as actionable business insights and valuable business outcomes.” — Kabir Barday, CEO, OneTrust


“Leading organizations realize that to be truly competitive in a rapidly evolving marketplace, they need to have a solid defensive footing. Turnkey privacy technologies enable them to move onto the offense by safely leveraging their data assets rapidly at scale.” — Luk Arbuckle, Chief Methodologist, Privacy Analytics


“We appreciate FPF’s analysis of the privacy tech marketplace and we’re looking forward to further research, analysis, and educational efforts by the Privacy Tech Alliance. Customers and consumers alike will benefit from a shared understanding and common definitions for the elements of the privacy stack.” — Corinna Schulze, Director, EU Government Relations, Global Corporate Affairs, SAP


“The report shines a light on the evolving sophistication of the privacy tech market and the critical need for businesses to harness emerging technologies that can tackle the multitude of operational challenges presented by the big data economy. Businesses are no longer simply turning to privacy tech vendors to overcome complexities with compliance and regulation; they are now mapping out ROI-focused data strategies that view privacy as a key commercial differentiator. In terms of market maturity, the report highlights a need to overcome ambiguities surrounding new privacy tech terminology, as well as discrepancies in the mapping of technical capabilities to actual business needs. Moving forward, the advantage will sit with those who can offer the right blend of technical and legal expertise to provide the privacy stack assurances and safeguards that buyers are seeking – from a risk, deployment and speed-to-value perspective. It’s worth noting that the growing importance of data privacy to businesses sits in direct correlation with the growing importance of data privacy to consumers. Trūata’s Global Consumer State of Mind Report 2021 found that 62% of global consumers would feel more reassured and would be more likely to spend with companies if they were officially certified to a data privacy standard. Therefore, in order to manage big data in a privacy-conscious world, the opportunity lies with responsive businesses that move with agility and understand the return on privacy investment. The shift from manual, restrictive data processes towards hyper automation and privacy-enhancing computation is where the competitive advantage can be gained and long-term consumer loyalty—and trust— can be retained.” — Aoife Sexton, Chief Privacy Officer and Chief of Product Innovation, Trūata


“As early pioneers in this space, we’ve had a unique lens on the evolving challenges organizations have faced in trying to integrate technology solutions to address dynamic, changing privacy issues in their organizations, and we believe the Privacy Technology Stack introduced in this report will drive better organizational decision-making related to how technology can be used to sustainably address the relationships among the data, processes, and outcomes.” — Chris Babel, CEO, TrustArc


“It’s important for companies that use data to do so ethically and in compliance with the law, but those are not the only reasons why the privacy tech sector is booming. In fact, companies with exceptional privacy operations gain a competitive advantage, strengthen customer relationships, and accelerate sales.” — Justin Antonipillai, Founder & CEO, Wirewheel

Colorado Privacy Act Passes Legislature: Growing Inconsistencies Ramp Up Pressure for Federal Privacy Law

Today, the Colorado Senate approved the House version of the Colorado Privacy Act (SB21-190), which passed the House yesterday, June 7. If approved by Governor Jared Polis, Colorado will follow California and Virginia as the third U.S. state to establish baseline legal protections for consumer privacy.

“Although the Colorado Privacy Act contains notable advances that build on California and Virginia — in particular, formalizing a global privacy control, and applying to non-profit organizations — there continues to be an urgent need for Congress to set federal standards that create baseline nationwide protections for all.”

Statement by Polly Sanderson, Policy Counsel, Future of Privacy Forum

Colorado’s law features elements of both Virginia and California’s consumer privacy laws, as well as some elements unique to Colorado. The law is the first in the U.S. to apply to non-profit entities in addition to commercial entities. It contains a strong consent standard to process personal data for incompatible secondary uses and to process sensitive data such as health information, race, ethnicity, and other sensitive categories. The bill prohibits controllers from employing so-called “dark patterns” to obtain consent and allows consumers to exercise their opt-out rights via authorized agents. Consumers will be able to express their intent to opt-out of sales and targeted advertising via a universal opt-out mechanism established by the Colorado Attorney General, who is also granted authority to issue opinion letters and interpretive guidance on what constitutes a violation of the Act. 

Similar to Virginia’s recently passed Consumer Data Protection Act, Colorado’s law requires controllers to conduct data protection assessments for processing activities that present a heightened risk of harm to a consumer. This, along with FIPPs-inspired data minimization and purpose specification provisions, promotes organizational accountability and moves beyond a notice and consent framework. By excluding de-identified data from the scope of personal data and excluding pseudonymous data from the rights of access, correction, deletion, and portability, the law follows existing standards and incentivizes covered entities to maintain data in less identifiable formats. 

As a growing number of states begin to pass their own consumer privacy laws, concerns about interoperability may begin to emerge. For instance, definitional differences regarding what constitutes sensitive data, pseudonymous data, and biometric data may present operational challenges for businesses. Similarly, the scope of access, deletion, and other consumer rights differ between Colorado, Virginia, and California, creating potential implementation challenges. Finally, the research exemptions of each of these laws differ in their flexibility, consent, and oversight requirements.

Media Inquiries: Polly Sanderson, Senior Counsel at [email protected]

Privacy Trends: Four State Bills to Watch that Diverge from California and Washington Models

During 2021, state lawmakers have proposed a range of models to regulate consumer privacy and data protection. 

As the first state to pass consumer privacy legislation in 2018, California established a highly influential model with the California Consumer Privacy Act. In the years since, other states have introduced dozens of nearly identical CCPA-like bills. In 2019, the Washington Privacy Act emerged as an alternative model, and large numbers of nearly identical WPA-like bills were introduced in other states throughout 2019-2021. In February 2021, the passage of the Virginia Consumer Data Protection Act cemented the Washington model as an influential alternative framework. 

In 2021, however, numerous divergent frameworks have begun to emerge, with the potential to establish strong consumer protections, conflict with other states, and potentially influence federal privacy law. These proposals diverge from the California and Washington models in key ways, and are worth examining because of how they show ongoing cross-pollination, reveal concerns driving lawmakers about the inadequacy of notice and choice frameworks, and offer novel approaches for lawmakers and other stakeholders to discuss, debate, and consider. 

The California Model 

As the first state to enact consumer privacy legislation in 2018, California has a distinct and highly influential model for consumer privacy law. Since the passage of the California Consumer Privacy Act (CCPA), a proliferation of state proposals has adopted a similar framework, scope, and terminology. This reflects a general desire among state legislators to provide their constituents with at least the same privacy rights as those afforded to Californians; in 2018, many had not yet conceptualized alternative frameworks of their own. 

California-style proposals adopt “business-service provider” terminology, focus on consumer-business relationships, and are characterized by their focus on providing consumers with greater transparency and control over their personal data. They feature a bundle of privacy rights, including the right for consumers to “opt-out” of sales (or sharing) of personal data, and require businesses to post “Do Not Sell” links on their website homepages. Often, California-style proposals also include provisions which aim to make it easier for consumers to exercise their opt-out rights, such as authorized agent and universal opt-out provisions.

Though none have passed into law, the California model has influenced many state proposals over the past three years, such as Alaska’s failed HB 159 / SB 116, Florida’s failed HB 969 / SB 1734, and New York Governor Cuomo’s failed Data Accountability and Transparency Act incorporated into Budget Legislation. Oklahoma’s failed HB 1602 also adopted a similar framework, though it would require businesses to obtain “opt-in” consent to sell personal data, rather than “opt-out.” 

The Washington Model 

The Washington Privacy Act (WPA, SB 5062), sponsored by Sen. Reuven Carlyle (D), recently failed for the third consecutive legislative session. However, in February 2021, Virginia passed legislation that follows the general framework of the WPA. Virginia’s Consumer Data Protection Act (VA-CDPA), sponsored by Delegate Cliff Hayes (D) and Sen. David Marsden (D), will become effective on January 1, 2023. 

The framework includes (1) processor/controller terminology; (2) heightened privacy protections for sensitive data; (3) individual rights of access, deletion, correction, portability, and the right to opt out of sales, targeted advertising, and profiling in furtherance of legal (or similarly significant) decisions; (4) differential treatment of pseudonymous data; (5) data protection impact assessments for high-risk data processing activities; (6) flexibility for research; and (7) enforcement by the Attorney General. 

Numerous other active state bills adopt this framework, such as Colorado SB21-190, Connecticut SB 893, and failed proposals in Utah, Minnesota, and elsewhere. The Colorado and Connecticut proposals are both on the Senate floor calendars in their respective states. Of course, each WPA-type bill contains important differences. For instance, the Colorado and Connecticut proposals both broadly exclude pseudonymous data from all consumer rights, including opt-out rights. The Colorado proposal also features a global/universal opt-out provision for sales and targeted advertising, an opt-out standard for the processing of sensitive data (rather than opt-in), a prescriptive HIPAA de-identification standard (rather than the FTC’s 3-part test), and public research exemptions that do not incorporate provisions mandating oversight by an institutional review board (IRB) or a similar oversight body. 

Growing Divergence and Cross-Pollination 

In the three years since the passage of the CCPA, legislative divergence has increased as more and more states have convened task forces to study consumer privacy issues, and held hearings, roundtables, and one-on-one meetings with diverse experts from academia, the advocacy community, and industry. In other words, the laboratories of democracy have been experimenting — a trend which will likely continue in 2022 and beyond as legislators’ views on consumer privacy continue to become more sophisticated and nuanced. 

State bills in 2021, as compared to 2019-2020, are increasingly focused on bolstering notice and choice regimes (including a shift towards more “opt-in” rather than “opt-out” requirements), borrow more features from other laws (such as the GDPR’s “legitimate interests” framework), and in some cases experiment with novel approaches (such as fiduciary duties or “data streams”). 

For example, some state bills would require businesses to provide two-tiered short-form and long-form disclosures, and would authorize a government agency to develop a uniform logo or button to promote individual awareness of the short-form notice. Numerous proposals would generally require opt-in consent for all data processing, would prohibit manipulative user interfaces to obtain consent, and would designate user-enabled privacy controls as a valid mechanism for an individual to communicate their privacy preferences. Some proposals feature additional rights, such as the right not to be subject to “surreptitious surveillance,” the right not to be subject to a solely automated decision, and the right to object to processing. 

There is also a trend among proposals towards moving beyond a notice and choice framework, with the aim of moving the burden of privacy management away from individuals. For instance, many include strong purpose specification and data minimization requirements, and some include outright prohibitions on discriminatory data processing. At least one state (NJ A. 3283, discussed below) has taken inspiration from the EU’s General Data Protection Regulation (GDPR) by recognizing “legitimate interests” along with other lawful bases for data processing. 

Many proposals are taking novel or unique approaches to privacy legislation. For example, a Texas proposal leans towards conceptualizing personal data as property by enabling an individual to exchange their “data stream” as consideration for a contract with a business. Meanwhile, various proposals contain duties of loyalty, care, and confidentiality. These trust-based duties were first introduced into US legislation in 2018, when Sen. Brian Schatz (D-HI) introduced the Data Care Act (S. 2961). At that time, it wasn’t clear whether trust-based duties would become influential in the US. The fact that they have demonstrates the potential for cross-pollination between federal and state proposals. 

Four Notable Models to Watch 

Amidst such a large volume of California and Washington-like bills, it may be easy to miss the handful of states where legislators are taking a different approach to baseline or comprehensive privacy legislation. Even if they do not pass, these bills are worth examining because they could eventually influence federal privacy law. Additionally, they can provide insights into some of the most pressing concerns of policymakers: whether (and how) to regulate automated decision-making, including profiling; whether a framework should be based on privacy self-management, relationships of trust, civil rights, or personal data as property; and how personal data should be defined, and whether it should be subcategorized according to sensitivity, identifiability, source (first party, third party, derived), or something else. Answering these types of questions is not straightforward, and there are many reasonable philosophical positions for stakeholders to take. Close attention to legislative proposals can help to promote nuanced dialogue and debate about the relative merits and drawbacks of different approaches. 

Four active bills that are worth watching are (1) the New York Privacy Act (NYPA – S. 6701), (2) the New Jersey Disclosure and Accountability Transparency Act (NJ DaTA – A. 3283), (3) the Massachusetts Information Privacy Act (MIPA – S.46), and (4) Texas’s HB 3741.

  1. New York Privacy Act (NYPA)

The New York Privacy Act (NYPA) (S. 6701), introduced by Sen. Kevin Thomas in May 2021, has several distinctive features, such as an opt-in consent framework, duties of loyalty and care, heightened protections for certain types of consequential automated decision-making, and a data broker registry. The proposal passed out of the Consumer Protection Committee on May 18, and is now on the floor calendar. The legislature adjourns June 10. 

  2. New Jersey Disclosure and Accountability Transparency Act (NJ DaTA) 

The New Jersey Disclosure and Accountability Transparency Act (NJ DaTA – A. 3283), introduced by Assemblyman Andrew Zwicker (D), was heard before the Assembly Science, Innovation and Technology Committee on March 15, 2021. The legislature will remain in session through 2021. The framework includes six lawful bases for data processing, affirmative data processing duties, the right for an individual to object to processing, and heightened requirements surrounding automated decision-making. 

  3. Massachusetts Information Privacy Act (MIPA) 

The Massachusetts Information Privacy Act (MIPA – S.46) was introduced by Sen. Cynthia Stone Creem (D) in March 2021. The legislature will remain in session through 2021. MIPA is based on a notice-and-consent framework, with additional trust-based obligations for covered entities. Heightened protections apply to biometric data and location data, and “surreptitious surveillance” is prohibited. 

  4. Texas HB 3741

HB 3741, introduced by Rep. Capriglione (R), was referred to the Business & Industry Committee on Mar. 22. Texas’s legislative session is scheduled to end May 31, 2021. The proposal has numerous unique features: it would enable a consumer to provide their “data stream” as consideration under a contract, it would impose different restrictions on three defined subcategories of personal data, and it would require opt-in consent for geotracking. In addition, businesses would be required to maintain accurate personal data.

South Korea: The First Case Where the Personal Information Protection Act was Applied to an AI System

As AI regulation is being considered in the European Union, privacy commissioners and data protection authorities around the world are starting to apply existing comprehensive data protection laws to AI systems and how they process personal information. On April 28th, the South Korean Personal Information Protection Commission (PIPC) imposed sanctions and a fine of KRW 103.3 million (USD 92,900) on ScatterLab, Inc., developer of the chatbot “Iruda,” for eight violations of the Personal Information Protection Act (PIPA). This is the first time the PIPC has sanctioned an AI technology company for indiscriminate personal information processing.

“Iruda” caused considerable controversy in South Korea in early January after complaints that the chatbot used vulgar and discriminatory (racist, homophobic, and ableist) language in conversations with users. The chatbot, which assumed the persona of a 20-year-old college student named “Iruda” (Lee Luda), attracted more than 750,000 users on Facebook Messenger less than a month after release. The media reports prompted PIPC to launch an official investigation on January 12th, soliciting input from industry, law, academia, and civil society groups on personal information processing and legal and technical perspectives on AI development and services.

PIPC’s investigation found that ScatterLab used messages from KakaoTalk, a popular South Korean messaging app, collected by its apps “Text At” and “Science of Love” between February 2020 and January 2021 to develop and operate its AI chatbot “Iruda.” Around 9.4 billion KakaoTalk messages from 600,000 users were employed in training algorithms to develop the “Iruda” AI model, without any efforts by ScatterLab to delete or encrypt users’ personal information, including their names, mobile phone numbers, and addresses. Additionally, 100 million KakaoTalk messages from women in their twenties were added to the response database, with “Iruda” programmed to select and respond with one of these messages.

With regard to ScatterLab’s use of users’ KakaoTalk messages to develop and operate “Iruda,” PIPC found that including a “New Service Development” clause in the login terms of the apps “Text At” and “Science of Love” did not amount to users’ “explicit consent.” The description of “New Service Development” was determined to be insufficient for users to anticipate that their KakaoTalk messages would be used to develop and operate “Iruda.” Therefore, PIPC determined that ScatterLab processed users’ personal information beyond the purpose of collection.

In addition, ScatterLab posted its AI models on the code sharing and collaboration platform Github from October 2019 to January 2021, which included 1,431 KakaoTalk messages revealing 22 names (excluding last names), 34 locations (excluding districts and neighborhoods), gender, and relationships (friends or romantic partners) of users. This was found to be in violation of PIPA Article 28-2(2) which states, “A personal information controller shall not include information that may be used to identify a certain individual when providing pseudonymized information to a third party.”

ScatterLab also faced accusations of collecting personal information of over 200,000 children under the age of 14 without parental consent in the development and operation of its app services, “Text At,” “Science of Love,” and “Iruda,” as its services did not require age verification prior to subscribing.

PIPC Chairman Jong-in Yoon highlighted the complexity of the case at hand and the reasons why extensive public consultation took place as part of the proceedings: “Even the experts did not agree so there was more intense debate than ever before and the ‘Iruda’ case was decided after very careful review.” He explained, “This case is meaningful in that it has made clear that companies are prohibited from indiscriminately using personal information collected for specific services without clearly informing and obtaining explicit consent from data subjects.” Chairman Yoon added, “We hope that the results of this case will guide AI technology companies in setting the right direction for the processing of personal information and provide an opportunity for companies to strengthen their self management and supervision.”

PIPC plans to be active in supporting compliant AI Systems

PIPC also stated that it seeks to help AI technology companies improve their privacy capabilities by providing AI developers and operators with a “Self-Checklist for Personal Information Protection of AI Services,” as well as on-site consulting support. PIPC plans to actively support AI technology companies in developing AI and data-based industries while protecting people’s personal information.

ScatterLab responded to the decision, “We feel a heavy sense of social responsibility as an AI tech company regarding the necessity to engage in proper personal information processing in the course of developing related technologies and services,” and stated that, “Upon the PIPC’s decision, we will not only actively implement the corrective actions put forth by the PIPC but also work to comply with the law and industry guidelines related to personal information processing.”

Talking to Kids About Privacy: Advice from a Panel of International Experts

Now more than ever, as kids spend much of their lives online to learn, explore, play, and connect, it is essential to ensure their knowledge and understanding of online safety and privacy keeps pace. On May 13th, the Future of Privacy Forum and Common Sense assembled a panel of youth privacy experts from around the world for a webinar presentation on, “Talking to Kids about Privacy,” exploring both the importance of and approaches to talking to kids about privacy. Watch a recording of the webinar here.

The virtual discussion, moderated by FPF’s Amelia Vance and Jasmine Park, aimed to provide parents and educators with tools and resources to facilitate productive conversations with kids of all ages. The panelists were Rob Girling, Co-Founder of strategy and design firm Artefact, Sonia Livingstone, Professor of Social Psychology at the London School of Economics and Political Science (LSE), Kelly Mendoza, Vice President of Education Programs at Common Sense, Anna Morgan, Head of Legal and Deputy Commissioner of the Irish Data Protection Commission (DPC), and Daniel Solove, Professor of Law at George Washington University Law School and Founder of TeachPrivacy.

The first thing that parents and educators need to know? “Contrary to popular opinion, kids really care about their privacy in their personal lives, and especially now, in their digital lives,” shared panelist Sonia Livingstone. “When they understand how their data is being kept, shared, monetized and so forth, they are outraged.” To help inform youth, Livingstone curated an online toolkit with young people to answer frequently asked privacy questions that emerged from her research. 

And a close second: their views about privacy are closely shaped by their environment. “How children understand privacy is in some ways colored by the part of the world they come from and the culture and ideas about family and ideas about institutions that they can trust, and especially how far the digital world has already become something they rely upon,” Livingstone added.

Kelly Mendoza encouraged audience members to start having conversations about privacy with kids at a young age, and to get beyond the common but too simple advice to not share personal information online. Common Sense’s Digital Citizenship Curriculum provides free lesson plans to address timely topics and prepare students to take ownership of their digital lives by grade and topic. 

She also emphasized the important role that schools play in educating parents about privacy in her remarks. “It’s important that schools and educators and parents work together because really we’re finding that schools can play a really powerful role in educating parents,” Mendoza said. “Schools need to do a better job of communicating – what tools are they using? How are they rated and reviewed? What are the privacy risks? And why are they using this technology?” A useful starting point for schools and parents is Common Sense’s Family Engagement Resources Toolkit, which includes tips, activities, and other resources.  

Several panelists emphasized the critical role schools play in educating students about privacy. To do so effectively, schools must engage and educate teachers to ensure they are informed and equipped to have meaningful conversations about privacy with their students. 

Anna Morgan provided a model for engaging children in informing data protection policies through classroom-based lesson plans. Recognizing that the General Data Protection Regulation (GDPR) and Data Protection Law are complex, the DPC provided teachers with a quick start guide to provide background knowledge, enabling them to engage in discussions with children about their data protection rights and entitlements. 

Privacy can be a difficult concept to explain, and there’s nothing quite like a creative demonstration to bring privacy concerns to life. One example: the DPC created a fictitious app to solicit children’s reactions to the use of their personal data. Through their consultation, Morgan shared that 60 percent of the children surveyed believed that their personal data should not be used to serve them with targeted advertising, finding it scary and creepy to have ads following them around. A full report from the consultation can be found here.

Daniel Solove also highlighted the need for educational systems to teach privacy. “Children today are growing up in a world where massive quantities of personal information are being gathered from them. They’re growing up in a world where they’re more under surveillance than any other generation. There’s more information about them online than any other generation. And the ability for them to put information online and get it out to the world is also unprecedented,” Solove noted. “So I think it’s very important that they learn about these things, and as a first step, they need to appreciate and understand the value of privacy and why it matters.”

One way for kids to learn about privacy is through storytelling. Solove recently authored a new children’s book about privacy titled, THE EYEMONGER, and shared his motivations for writing the book with the audience. “There really wasn’t anything out there that explained to children what privacy was, why we should care about it, or really any of the issues that are involved in this space, so that prompted me to try to do something about it.” He also compiled a list of resources to accompany the book and help educators and parents teach privacy to their children.

Building on the thread of creating outside-the-box interactive experiences to help kids understand privacy, Rob Girling shared with the audience a game called The Most Likely Machine, developed by Artefact Group to help preteens understand algorithms. Girling saw a need to teach algorithmic literacy given the impact on children’s lives, from determining college and job applications to search engine results. For Girling, “It’s just starting to introduce the idea that underneath algorithms are human biases and data that is often biased. That’s the key learning we want kids to take away.”

Each of the panelists shared a number of terrific resources and recommendations for parents and educators, which we have listed and linked to below, along with a few of our own.

Watch the webinar in full here, and we hope you will use and share some of the excellent resources referenced below.

Rob’s Recommended Resources

Sonia’s Recommended Resources

Kelly’s Recommended Resources

Anna’s Recommended Resources

Dan’s Recommended Resources

Additional Future of Privacy Forum Resources of note:

China: New Draft Car Privacy and Security Regulation is Open for Public Consultation


by Chelsey Colbert

The author thanks Hunter Dorwart for his contribution to this text.

The Cyberspace Administration of China (CAC) released a draft regulation on car privacy and data security on May 12, 2021. China has been very active in automated vehicle development and deployment, and last fall it also proposed a draft comprehensive privacy law, which is moving towards adoption, likely by the end of this year.

The draft car privacy and data security regulation (“Several Provisions on the Management of Automobile Data Security”; hereinafter, “draft regulation”) is interesting for those tracking automated vehicle (AV) and privacy regulations around the world and is relevant beyond China – not only due to the size of the Chinese market and its potential impact on all actors in the “connected cars” space present there, but also because dedicated legislation for car privacy and data security is novel for most jurisdictions. In fact, the draft regulation raises several interesting privacy and data protection aspects worthy of further consideration, such as its strict rules on consent, privacy by design, and data localization requirements. The CAC is seeking public comment on the draft, and the deadline for comments is June 11, 2021. 

The draft regulation complements other regulatory developments around connected and automated vehicles and data. For example, on April 29, 2021, the National Information Security Standardization Technical Committee (TC 260), which is jointly administered by the CAC and the Standardization Administration of China, published a draft Standard on Information Security Technology Security Requirements for Data Collected by Connected Vehicles. The Standard sets forth security requirements for data collection to ensure compliance with other laws and facilitate a safe environment for networked vehicles. Standards like this are an essential component of corporate governance in China and notably fill in compliance gaps left in the law. 

The publication of the draft regulation and the draft standard indicate that the Chinese government is turning its attention towards the data and security practices of the connected cars industry. Below we explain the key aspects of this draft regulation, summarize some of the noteworthy provisions, and conclude with the key takeaways for everyone in the car ecosystem. 

Broad scope of covered entities: from OEMs to online ride-hailing companies

The draft regulation aims to strengthen the protection of “personal information” and “important data,” regulate data processing related to cars, and maintain national security and public interests. The scope of application of this draft regulation is fairly broad, both in terms of who it applies to and the types of data it covers. 

The draft regulation applies to “operators” that collect, analyze, store, transmit, query, utilize, delete, and provide overseas (activities collectively referred to as “processing”) personal information or important data during the design, production, sales, operation, maintenance, and management of cars “within the territory of the People’s Republic of China.” 

“Operators” are entities that design or manufacture cars, or service institutions such as OEMs (original equipment manufacturers), component and software providers, dealers, maintenance organizations, online car-hailing companies, insurance companies, etc. (Note: The draft regulation includes “etc.,” here and throughout, which appears to mean that it is a non-exhaustive list.)

Covered data: Distinction among “personal information,” “important data,” and “sensitive personal information”

The draft regulation considers three data types, with an emphasis on “personal information” and “important data,” which are defined terms under Article 3. A third type, “sensitive personal information,” is mentioned within the draft, at Article 8, and in a separate press release document. 

Personal information includes data from car owners, drivers, passengers, pedestrians, etc. (non-exhaustive list) and also includes information that can infer personal identity and describe personal behavior. This is a broad definition and is notable because it explicitly includes information about passengers and pedestrians. As the business models evolve and the ecosystem of players in the car space grows, it has become more important to consider individuals other than just the driver or registered user of the car. The draft regulation appears to use the words “users” and “personal information subjects” when referring to this group of individuals broadly and also uses “driver,” “owner,” and “passenger” throughout.

The second type of data covered is “important data,” which includes:

The inclusion of this data type is notable because it is defined in addition to “sensitive personal information” and includes data about users and infrastructure (i.e., the car charging network). Article 11 prescribes that when handling important data, operators should report to the provincial cyberspace administration and relevant departments the type, scale, scope, storage location and retention period, the purposes for collection, whether it was shared with a third party, etc. in advance (presumably in advance of processing this type of data, but this is something that may need to be clarified).

The third type of data mentioned in the draft regulation is “sensitive personal information,” and this includes vehicle location, driver or passenger audio and video, and data that can be used to determine illegal driving. There are certain obligations for operators processing this type of data (Articles 8 and 16).

Article 8 prescribes that where “sensitive personal information” is collected or provided outside of the vehicle, operators must meet certain obligations:

The definitions of these three types of data mirror similar definitions in other Chinese laws or draft laws currently being considered for adoption, such as the Civil Code, the Personal Information Protection Law, and the Cybersecurity Law. Consistency across these laws indicates a harmonization of China’s emerging data governance regulatory model. 

Obligations based on the Fair Information Practice Principles

Articles 4–10 include many of the fair information practice principles, such as purpose specification and data minimization in Article 4 and security safeguards in Article 5, as well as privacy by design (Articles 6(4), 6(5), and 9). There are a few notable provisions worth discussing in more detail, organized under the following headings below: local processing, transparency and notice, consent and user control, biometric data, annual data security management, and violations and penalties. 

Local (“on device”) processing

Personal information and important data should be processed inside the vehicle, wherever possible (Article 6(1)). Where data processing outside of the car is necessary, operators should ensure the data has been anonymized wherever possible (Article 6(2)).

Transparency and Notice

When processing personal information, the operator is required to give notice of the types of data being collected and provide the contact information for the person responsible for processing user rights (Article 7). This notice can be provided through user manuals, onboard display panels, or other appropriate methods. The notice should include the purpose for collection, the moment that personal information is collected, how users can stop the collection, where and for how long data is stored, and how to delete data stored in the car and outside of the vehicle.

Regarding sensitive personal information (Article 8(3)), the operator is obliged to inform the driver and passengers that this data is being collected through a display panel or a voice in the car. This provision does not include “user manuals” as an example of how to provide notice, which potentially means that this data type is worthy of more active notice than personal information. This is notable because operators cannot rely on notice being given through a privacy notice placed on a website or in the car’s manual.

Consent and User Control, including a two-week deletion deadline

Article 9 requires operators to obtain consent to collect personal information, except where consent is not required by laws and regulations. This provision notes that consent is often difficult to obtain (e.g., when collecting audio and video of pedestrians outside the car). Because of this difficulty, data should only be collected when necessary and should be processed locally in the vehicle. Operators should also employ privacy by design measures, such as de-identification on devices.

Article 8(2) (requirements when collecting sensitive personal information) requires operators to obtain the driver’s consent and authorization each time the driver enters the car. Once the driver leaves the driver’s seat, that consent session has ended, and a new one must begin once the driver gets back into the seat. The driver must be able to stop the collection of this type of data at any time, be able to view and make inquiries about the data collected, and request the deletion of the data (the operator has two weeks to delete the data). It is worth noting that Article 8 includes six subsections, some of which appear to apply only to the driver or owner and not passengers or pedestrians. 

These consent and user control requirements are quite notable and would have a non-trivial impact on the design of the car, the user experience, and the internal operations of the operator. Requiring consent and authorization each time the driver gets into the driver’s seat could degrade the user experience, much like the consent pop-ups a visitor must close before reading or using a website on every visit. Furthermore, stopping the collection of location data, video data, and other telematics data (if used to determine illegal driving) could present safety and functionality risks and cause the car not to operate as intended or safely. These are some of the areas where stakeholders are expected to submit comments during the public consultation. 

Biometric data

Biometric data is mentioned throughout the draft regulation, as this type of data is implicitly or explicitly included in the definitions of personal information, important data, and sensitive personal information. Biometric data is specifically mentioned in Article 10, which is about the biometric data of drivers. Biometric data is an increasingly common data type collected by cars and deserves special attention. Article 10 would require that the biometric data of the driver (e.g., fingerprints, voiceprints, faces, heart rhythms, etc.) only be collected for the convenience of the user or to increase the security of the vehicle. Operators should also provide alternatives to biometrics. 

Data localization

Articles 12-15 and 18 concern data localization. Both personal information and important data should be stored within China, but if it is necessary to store elsewhere, the operator must complete an “outbound security assessment” through the State Cyberspace Administration, and the operator is permitted to send only the data specified in that assessment overseas. The operator is also responsible for overseeing the overseas recipient’s use of the data to ensure appropriate security and for handling all user complaints. 

Annual data security management status

Article 17 places additional obligations on operators to report their annual data security management status to relevant authorities before December 15 of each year when:

  1. They process personal information of more than 100,000 users, or
  2. They process important data. 

Given that this draft regulation applies to passengers and pedestrians in addition to drivers, it would not take long for the threshold of 100,000 users to be met, especially for operators who manage a fleet of cars for rental or ride-hail. Additionally, since the definitions of personal information and important data are so broad, it is likely that many operators would trigger this reporting obligation. The obligations include recording the contact information of the person responsible for data security and handling user rights; recording relevant information about the scale and scope of data processing; recording with whom data is shared domestically; and other security conditions to be specified. If data is transferred overseas, there are additional obligations (Article 18). 

Violations and Penalties

Violations of the regulation would result in punishment in accordance with the “Network Security Law of the People’s Republic of China” and other laws and regulations. Operators may also be held criminally responsible. 

Conclusion 

China’s draft car privacy and security regulation provides relevant information for policymakers and others thinking carefully about privacy and data protection regarding cars. The draft regulation’s scope is very broad and includes many players in the mobility ecosystem beyond OEMs and suppliers (e.g., online car-hailing companies and insurance companies).

With regard to user rights, the draft regulation recognizes that individuals other than the driver (e.g., passengers and pedestrians) will have their personal information processed, and it extends data protection and user rights to these individuals. The draft regulation would apply to three broad categories of data (personal information, important data, and sensitive personal information).

In privacy and data protection laws from the EU to the US, we have continued to see different obligations arise depending on the type or sensitivity of data and how data is used. This underscores the need for organizations to have a complete data map; indeed, it is crucial that all operators in the connected and automated car ecosystem have a sound understanding of what data is being collected from which person and where that data is flowing. 

The draft regulation also highlights the importance of transparency and notice, as well as the challenges of consent and user control. It is a challenge to appropriately notify drivers, passengers, and pedestrians about all of the data types being collected by a vehicle.

Privacy and data protection laws will have a direct impact on the design, user experience, and even the enjoyment and safety of cars. It is crucial that all stakeholders are given the opportunity to provide feedback in the drafting of privacy and data protection laws that regulate data flows in the car ecosystem and that privacy professionals, engineers, and designers become much more comfortable working together to operationalize these rules. 

Image by Tayeb MEZAHDIA from Pixabay 

Check out other blogs in the Global Privacy series:

A New Era for Japanese Data Protection: 2020 Amendments to the APPI

The Right to Be Forgotten is Not Compatible with the Brazilian Constitution. Or is it?

India: Massive Overhaul of Digital Regulation with Strict Rules for Take-down of Illegal Content and Automated Scanning of Online Content