FPF Statement on President Biden’s 2023 State of the Union Address

“Data protection and privacy are fundamental human rights. The benefits of modern technology in areas like mobility, health care, and education cannot be fully realized until a clear, comprehensive federal privacy law is enacted. Americans will benefit from a standard that provides individuals with needed protections and organizations with certainty and guidance.

While privacy is important across society, its protections are increasingly central to marginalized communities, and the intersection of technology and civil rights is properly at the core of the Biden agenda. Children, who are particularly vulnerable when their information is collected and used, deserve protections against commercial exploitation. But it is essential that efforts to protect young people don’t require all adults to identify themselves before accessing online content, adversely affecting all internet users and giving a new tool to those who would limit access to content they object to on ideological grounds.

Children’s privacy laws are important. But a comprehensive federal law to protect everyone is critical to addressing the gaps in the current U.S. approach to data privacy, which has resulted in insufficient legal protections and a patchwork of state laws. Individuals deserve consistent privacy protections regardless of their zip code or age. We agree with President Biden: It’s time for federal lawmakers to speak in a united, bipartisan voice to create uniform privacy protections for all Americans.

We appreciate the recognition of the Biden Administration’s efforts to support comprehensive privacy protections at this critical juncture. Current business practices and new technologies are being shaped by laws worldwide, leaving the U.S. far behind other countries with regard to data protection and privacy.”

 – Jules Polonetsky, CEO, Future of Privacy Forum

7 Tips For Protecting Your Privacy Online

Today, almost everything we do online involves companies collecting personal information about us. Personal data is collected and regularly used for a number of reasons – like when you use social media accounts, when you shop online or redeem digital coupons at the store, or when you search the internet. 

Sometimes, information is collected about you by one company and then shared or sold to another. While data collection can offer benefits to both you and businesses – like connecting with friends, getting directions, or sales promotions – it can also be used in ways that are intrusive – unless you take control.

There are many ways you can protect your personal data and information and control how it is shared and used. On this Data Privacy Day – recognized annually on January 28 to mark the anniversary of Convention 108, the first binding international treaty to protect personal data – the Future of Privacy Forum and other organizations are raising awareness and promoting best practices for data privacy.

1. Check Your Privacy Settings

Many social media sites offer privacy settings you can tailor to limit how your data is collected or used.

Instagram allows you to manage a variety of privacy settings, including who has access to your posts, who can comment on or like your posts, and what happens to posts after you delete them. You can view and change your settings here.

TikTok allows you to choose between a public and a private account, decide which accounts can view your posted videos, and change your personalized ad settings. You can check your settings here.

Twitter allows you to manage whether it shares your information with third-party businesses, whether it can track your internet browsing outside of Twitter, and whether ads are tailored to you. Check your settings here.

Facebook provides a range of privacy settings that can be found here.

In addition, you can check the privacy and security settings for other popular applications here.

What other apps do you use often? Check to see which settings they provide!

2. Limit Sharing of Location Data

Most social media sites will ask for access to your location data. Does the app need it for an obvious reason, like helping you with directions or showing nearby friends? If not, feel free to say no. And be aware that location data is often used to tailor ads and recommendations based on locations you have recently visited. Allowing access to location services may also permit the sharing of location information with third parties.

To check the location permissions granted to social media apps on an iPhone or Android device, follow the steps below (exact menu names may vary by software version).

iPhone: Open Settings > Privacy & Security > Location Services, then select an app to choose whether it can access your location (for example, Never, Ask Next Time, or While Using the App).

Android: Open Settings > Location > App location permissions, then select an app to allow or deny location access.

3. Keep Your Devices & Apps Up to Date

Keeping software up to date is one of the most effective ways to make sure that your device is protected against the latest software vulnerabilities. Having the latest security software, web browser, and operating system installed is the best way to protect against various online threats. By enabling automatic updates on your devices, you can be sure that your apps and operating system are always up to date.

Users can check the status of their operating systems in the settings app. For iPhone users, navigate to “Software Update,” and for Android devices, look for the “Security” page in settings.

4. Use a Password Manager

Utilizing a strong, unique password for each web-based account you have helps ensure personal data and information are protected from unauthorized use. It can be difficult to remember complex passwords for every account, and using a password manager can help. Password managers save passwords as you create and log in to your accounts, often alerting you to any duplicates and suggesting stronger passwords. For example, if you use an Apple product when signing up for new accounts and services, you can allow your iPhone, Mac, or iPad to generate strong passwords and safely store them in iCloud Keychain for later access. Some of the best third-party password managers can be found here.
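As a rough illustration of what “generating a strong password” means in practice, here is a minimal Python sketch using the standard library’s secrets module. It is only a toy example of random password generation, not a depiction of how any particular password manager works; the 20-character length and character set are arbitrary choices for the illustration.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password built from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A real password manager would also store the result encrypted and
# associate it with the account it belongs to.
print(generate_password())
```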

5. Enable Two-Factor Authentication

Two-factor authentication adds an additional layer of protection to your accounts. The first factor is the familiar username and password combination that has been used for years. The second factor is a code sent to a personal device, typically by text message or email. This added step makes it harder for malicious actors to gain access to your accounts. Two-factor authentication only adds a few seconds to your day but can save you from the headache and harm that comes from compromised accounts. To be even safer, use an authenticator app as your second factor.
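For readers curious how an authenticator app turns a shared secret into a six-digit code, here is a rough Python sketch of the time-based one-time password (TOTP) calculation described in RFC 6238, using only the standard library. The base32 secret shown is a made-up placeholder for illustration; real apps use the secret you scan from a QR code when enrolling.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret for demonstration only.
print(totp("JBSWY3DPEHPK3PXP"))
```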

As many of us continue to work and learn remotely, it’s important to stay aware of the information you share on and offline. Remember to adjust your settings regularly, staying on top of any privacy changes and updates made on the web applications you use daily. Take charge of protecting your personal data by being intentional about what you post online and encouraging others to look at the information they may be sharing. By adjusting your settings and making changes to your web accounts and devices, you can better maintain the security and privacy of your personal data.

6. Use End-to-End Encryption for Secure Messaging

Using applications with secure end-to-end encryption, such as Signal and ProtonMail, ensures that only you and the intended recipient are able to read your messages. Other applications such as WhatsApp and Telegram also offer end-to-end encryption, though be sure to check your settings in Telegram, as its messages are not end-to-end encrypted by default.
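As a simplified sketch of the public-key encryption idea behind end-to-end messaging, the Python example below uses the third-party PyNaCl library (assumed installed via pip install pynacl). It shows one key pair encrypting a message that only the other key pair can decrypt; real messengers such as Signal add further protections, like forward secrecy and key verification, on top of this basic idea.

```python
# pip install pynacl  (third-party library, assumed available)
from nacl.public import PrivateKey, Box

# Each party generates a key pair; only the public halves are shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts a message for Bob with her private key and his public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# A server relaying the ciphertext cannot read it; only Bob can decrypt.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```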

As many of us share sensitive information with our families and friends, it’s critical to be mindful of how our personal information is shared and who has access to it. What better time to reassess our data practices and think about this important topic than during Data Privacy Day?

7. Turn Off Personalized Ads

Tired of ads following your every move online? Take control by going into the settings of your applications. See below for how-to guides with quick, step-by-step instructions to turn off ad personalization for popular apps you may be using: 

If you’re interested in learning more about one of the topics discussed here or about other issues that are driving the future of privacy, sign up for our monthly briefing, check out one of our upcoming events, or follow us on Twitter and LinkedIn. FPF brings together some of the top minds in privacy to discuss how we can all benefit from the insights gained from data while respecting the individual right to privacy.

FPF Response to CFPB Data Portability Proposal

In response to the Consumer Financial Protection Bureau’s (CFPB) request for comment regarding data portability for financial products and services, the Future of Privacy Forum filed comprehensive comments and recommendations, urging the Bureau to craft balanced, informed privacy rules that protect individuals’ personal information while enhancing trust in the privacy and security of emerging data portability mechanisms in this space.

The request is an initial step for the CFPB, which is expected to launch a rulemaking on this issue under the Dodd-Frank Act later this year; the rulemaking will likely implicate practices such as peer-to-peer payments, tax filings, and wealth management services.

In our submission, FPF provides more than 20 specific recommendations to the CFPB, reflecting FPF’s expertise in this area and in the interplay of developing business practices and technology. Key recommendations include:

The CFPB’s regulatory activities are likely to clarify roles and obligations that will lead the way for advancements in data portability in the financial sector. Together, clearer roles and policies can create simpler, more consistent, and safer consumer experiences. FPF looks forward to continued progress on these important topics, which can also advance thought leadership about data portability and open data across other industry sectors.

Our comments are supported by over a year of meetings and outreach with leaders in banking, credit management, financial data aggregation, and solution providers to comprehensively understand the developing industry of open banking. In 2022, FPF organized an event on open banking with the Organisation for Economic Co-operation and Development (OECD), which was attended by regulators and key industry players representing many jurisdictions. FPF distributed a paper at the event, Data Portability in Open Banking: Privacy and Other Cross-Cutting Issues, detailing how different jurisdictions’ laws affect open banking activities and intersect with data protection law, including issues surrounding consent, security, and data subject portability rights.

This Year’s Must-Read Privacy Papers to be Honored at Capitol Hill Event

The Future of Privacy Forum’s 13th Annual Privacy Papers for Policymakers Award Recognizes Influential Privacy Research

Today, the Future of Privacy Forum (FPF) — a global non-profit focused on data protection headquartered in Washington, D.C. — announced the winners of its 13th annual Privacy Papers for Policymakers (PPPM) Awards.

The PPPM Awards recognize leading privacy scholarship that is relevant to policymakers in the U.S. Congress, at federal agencies, and at international data protection authorities. Six winning papers, one honorable mention, a student submission, and a student honorable mention were selected by a diverse group of leading academics, advocates, and industry privacy professionals from FPF’s Advisory Board.

Award winners will have the unique opportunity to showcase their papers at the Privacy Papers for Policymakers ceremony on February 16, 2023, on Capitol Hill.

“Academic scholarship can serve as a valuable resource for policymakers,” said Jules Polonetsky, FPF’s CEO. “Now more than ever, topics such as discriminatory exclusion, punitive surveillance, algorithmic fairness, cross-border data flows, reproductive health and the enforcement of data protection rules are at the forefront of the privacy debate. These papers are ‘must-reads’ for legislators and data protection regulators grappling with data protection issues and enforcement.”

FPF’s 2023 Privacy Papers for Policymakers Award winners are:

In addition to the winning papers, FPF selected for Honorable Mention: “The art of data privacy,” an excerpt from the book Protecting Your Privacy in a Data-Driven World, by Claire McKay Bowen, Center on Labor, Human Services, and Population, The Urban Institute.

FPF also selected for the Student Paper Award, Caught in quicksand? Compliance and Legitimacy Challenges In Using Regulatory Sandboxes To Manage Emerging Technologies by Walter G. Johnson, a Ph.D. Scholar with the Australian National University School of Regulation and Global Governance (RegNet). A Student Paper Award Honorable Mention went to My Cookie is a phoenix: detection, measurement, and lawfulness of cookie respawning with browser fingerprinting by Nataliia Bielova, French National Institute for Research in Digital Science and Technology; Imane Fouad, Inria Centre at the University of Lille; Arnaud Legout, Inria Center of Côte d’Azur University; and Cristiana Santos, Utrecht University School of Law.

The winning papers were selected based on the strength of their research and the relevance of their proposed policy solutions for policymakers and regulators in the U.S. and abroad.

The Future of Manipulative Design Regulation

Regulators in the United States and around the globe are bringing enforcement actions and crafting rules intended to combat manipulative design practices online. These efforts are complex and address a range of consumer protection issues, including privacy and data protection risks. They raise thorny questions about how to distinguish between lawful designs that encourage individuals to consent to data practices and unlawful designs that manipulate users through unfair and deceptive techniques. As policymakers enforce existing laws and propose new rules, it is crucial to identify when the design and default settings of online services constitute unlawful manipulative design that impairs users’ intentional decision-making.

This post describes the current U.S. regulatory stance regarding manipulative design (also called deceptive design or “dark patterns”), highlighting major trends and takeaways. We start by reviewing how leading experts have distinguished between persuasive design and manipulative design. We then explain the most prominent rules that address manipulative design in the data protection context, as well as emerging proposals that seek to further regulate manipulative design. 

Recent laws and emerging proposals largely use a common approach to crafting these rules. Many focus on the role of design in consent flows and bar online services from acting “to design, modify, or manipulate a user interface with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice.” Policymakers are particularly focused on analyzing the quality of organizations’ notices, the symmetry between consent and denial clickflows, the ease of cancellation or revocation of consent, and the presence of design features associated with compulsive usage. Similar but narrower rules also appear in laws and proposals that seek to protect specific vulnerable or marginalized populations, including young people.

Finally, we discuss the opportunities and challenges in applying anti-manipulative design rules to specific business sectors or practices. While narrower approaches have been codified in some circumstances – including with regard to young users – other attempts to legislate in specific, narrow contexts remain mere proposals.

Introduction

Enforcement agencies are increasingly focused on manipulative design practices online. In 2021, the Federal Trade Commission (FTC or agency) convened a workshop regarding “digital dark patterns” and has since demonstrated a heightened interest in bringing enforcement actions that target manipulative designs. In December 2022, the agency announced a $245 million settlement with Epic Games arising from allegations that the gaming company used “dark patterns” in its popular Fortnite game franchise. The FTC complaint argues that Epic used “design tricks” to lead users to make accidental or unauthorized purchases and blocked accounts when users sought refunds. The settlement follows prior enforcement actions that cited manipulative design as a key element of the Commission’s claim that online services deceived consumers in order to collect or use data.

This increased scrutiny means that privacy practitioners are confronted with the difficult task of understanding, recognizing, and avoiding manipulative design. A central challenge is to distinguish between manipulative design and lawful persuasive practices. There are few straightforward answers. Experts often disagree about which design practices are prohibited by existing laws – including prohibitions against unfair and deceptive practices (“UDAP” laws) – as well as the optimal scope for proposed new regulatory efforts. 

What is Manipulative Design?

Organizations employ lawful, persuasive design every day when seeking to make a product or service look appealing or to truthfully inform consumers about the features of digital services and the available privacy options. Indeed, all design interfaces must constrain user autonomy to some extent in that they provide a limited number of choices within a mediated environment. In contrast, manipulative design tricks individuals or obfuscates material information. For example, manipulative design elements can steer individuals into unwittingly disclosing personal information, incurring unwanted charges, or compulsively using services. 

Researchers and advocates have proposed numerous approaches to classifying and categorizing manipulative design techniques. These approaches catalog forms of troubling user experience architecture ranging from the straightforwardly deceptive, such as “sneak into basket” (“[a]dding additional products to users’ shopping carts without their consent”), to the more ambiguous, such as “confirmshaming” (when organizations use emotion-laden language to pressure users to make purchases or maintain subscriptions). Often, illegal manipulative design prevents individuals from taking desired actions, such as canceling a subscription, or obscures information about the terms of an agreement.

The breadth and nuance of deceptive design taxonomies raise challenges for enforcing existing rules and drafting new regulations. It is difficult to treat the diverse range of conduct that falls under the umbrella category of manipulative design under a single regulatory framework. Furthermore, not all practices identified as “dark patterns” in taxonomies rise to a level of harm or deception that established consumer protection law prohibits or proposed legislation would bar. Ambiguity abounds when experts debate whether a particular practice is unlawful, including practices like delivering emotionally worded messaging to users or establishing default settings that some individuals love but others loathe. This creates tricky problems for policymakers and practitioners seeking to distinguish between lawful and unlawful design techniques, and a dominant approach to analyzing these issues has yet to emerge. Still, some central themes are developing, as policymakers focus on factors such as the quality of notice, symmetry between consent and denial clickflows, ease of cancellation or revocation of consent, and the use of design features associated with compulsive usage as hallmarks of unlawful manipulative design. The next section provides an overview of contemporary regulatory treatment of manipulative design in the privacy context, and then explores emerging attempts to demarcate illegal manipulative design in draft bills.

Prohibitions on Manipulative Design in Privacy Laws

The following charts provide an overview of the treatment of manipulative design in state privacy laws and federal draft bills. Manipulative design has important implications for consumer privacy. User consent obtained through manipulation, because it is neither “informed” nor “freely-given,” undermines the basis of any notice and consent regime for data collection, use, and sharing. Platforms intending to employ manipulative design techniques can be incentivized to increase their data collection, so that they might use this data to determine what makes users particularly susceptible to specific sales pitches or invitations to opt-in. And deceptive design can be deployed to manipulate and extract data from users in ways that undermine their privacy interests. In light of these and other concerns, in recent years policymakers have begun to include prohibitions against the use of manipulative design in privacy legislation. Most often, these prohibitions take a narrow approach to manipulative design, focusing on the context of consent.

Federal Privacy Bills

Chart 1: Federal Privacy Bills

State Privacy Laws

Chart 2: State Privacy Laws

The vast majority of legislation that seeks to restrict or prohibit the use of manipulative design online in the U.S. is directly rooted in the Deceptive Experiences To Online Users Reduction (DETOUR) Act, first introduced in Congress in 2018 by Senators Warner (D-VA) and Fischer (R-NE) but never passed into law. The DETOUR Act would forbid websites, platforms, and services from acting “to design, modify, or manipulate a user interface with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data.” While the DETOUR Act did not use the term “dark patterns,” variations and expansions of the bill’s language are replicated in more recent laws and bills that explicitly prohibit the use of “dark patterns” or define the elements of valid consent to exclude “dark patterns.”

The following sections explore: 1) the influence of the DETOUR Act’s language, as well as the emergence of other approaches to restricting manipulative design practices in the U.S; 2) the emerging trend of regulators seeking to restrict the use of manipulative design beyond the consent context, such as to encourage users to share data or to make certain choices; and 3) laws and bills that seek to restrict the use of one form of manipulative design in particular, such as designs that encourage addiction or compulsive usage of social media, content streaming, and gaming platforms.

1. The Impact of the DETOUR Act’s Language (and Its Variations)

The above charts demonstrate a common trend in comprehensive privacy proposals: recycling the DETOUR Act’s language to prohibit the use of manipulative design in the context of consent. A number of state laws include a narrowed version of the DETOUR Act’s phrasing, leaving out the “or user data” clause, including the California Consumer Privacy Act (CCPA), the Colorado Privacy Act (CPA), and the Connecticut Data Privacy Act (CDPA) (see Chart 2). The leading comprehensive privacy bill in the 2022 Congress, the American Data Privacy and Protection Act (ADPPA), includes DETOUR Act-like language to define valid consent (see Chart 1). The DETOUR Act’s language is becoming an American export as well: Recital 67 of the recently passed EU Digital Services Act (DSA) prohibits the “providers of online platforms” from “deceiving or nudging recipients of the service and from distorting or impairing the autonomy, decision-making, or choice of the recipients of the service via the structure, design or functionalities of an online interface or a part thereof” (emphasis added).

The proliferation of the narrowed version of DETOUR Act’s language in laws seeking to restrict different types of designs and conduct raises two major considerations for policymakers: 1) efforts to regulate manipulative design may be duplicative of existing laws, including laws that prohibit unfair and deceptive practices; and 2) applying the DETOUR Act’s language solely in the context of consent to data processing may fail to address some key harms that arise from manipulative design techniques.

First, many manipulative design categories and practices may already be prohibited under U.S. law. Specifically, as commentators have noted, numerous forms of manipulative design are likely illegal under federal and state UDAP laws. Examples include designs that make it impossible to unsubscribe from a service or that surreptitiously add items to a user’s online shopping cart before checkout. Recent FTC enforcement actions against companies that have trapped consumers in payment plans without the ability to easily unsubscribe or coerced them into sharing their sensitive data demonstrate that regulators have the legal authority to enforce against the use of manipulative design. Lawmakers should evaluate the extent to which regulators are empowered to take action against manipulative design practices under existing laws, and use their findings to inform the creation of any new regulatory regimes. 

Second, policymakers should examine the practical value of applying the DETOUR Act’s language solely in the context of consent in comprehensive privacy regulation. While the major U.S. privacy regimes require user consent in different contexts, they typically establish the same standard for valid consent: “freely given, specific, informed, and unambiguous.” This is a high standard derived from European law, and it is not clear what an explicit statutory prohibition against manipulative design in obtaining consent meaningfully adds to these requirements. For example, the use of design elements that induce false beliefs or hide or delay disclosure of material information would seem to render “informed” and “unambiguous” consent impossible. At the same time, statutory language that exclusively targets consent and privacy does not squarely address harmful examples of manipulative design that have been identified outside those contexts, such as “obstruction” (“[m]aking it easy for the user to sign up for a service but hard to cancel it”) and deceptive “countdown timers” (falsely “[i]ndicating to users that a deal or discount will expire using a counting-down timer”). As they consider new rules, lawmakers should carefully consider the activity they seek to target with prohibitions on the use of manipulative design, as well as the most useful format and context for such requirements.

2. Legislation that Addresses Manipulative Design Beyond Consent

While the DETOUR Act’s language regarding manipulative design applies only to consent and data collection, and much of the legislation that adapts the DETOUR Act’s language narrows its scope even further to just the consent context, recently lawmakers have crafted manipulative design provisions that apply to other types of interfaces and conduct. For example, the California Age Appropriate Design Code (CA AADC) prohibits services targeted at young people from using “dark patterns” to steer youth into sharing personal data or to act in a way that a “business knows, or has reason to know, is materially detrimental to” a child’s physical or mental health or “well-being.” Depending on how the CA AADC’s “materially detrimental” standard is interpreted, the law’s language could be construed as applying to design features far beyond consent, such as algorithmically-selected content, music and video feeds, and other core features of child-directed services. 

Child and privacy advocates view robust regulation of manipulative design as a critical component of laws, such as the CA AADC, that aim to protect the rights of marginalized, multi-marginalized, and vulnerable individuals, who may be less able to identify and protect themselves against deceptive features. For example, children, teenagers, the elderly, and non-native English speakers may be particularly susceptible to certain deceptive tactics, such as “trick questions” (“[u]sing confusing language to steer users into making certain choices”). These heightened risks suggest that stronger protections may be necessary for vulnerable individuals or in connection with services targeted at such users. Others have suggested protections for mobile-first users, preventing bad actors from deceiving individuals who access the internet on a mobile device by placing crucial information at the end of long disclosures or agreements.

If broad bans on manipulative design become the norm, however, policymakers should be careful to define the scope of those prohibitions with precision, and should be wary of creating compliance burdens for businesses that do not have corresponding benefits for consumers. Policymakers considering new restrictions on manipulative design outside the narrow consent context must examine whether applying the DETOUR Act’s prohibition on “dark patterns” could be overinclusive, extending to de minimis conduct or even beneficial practices that should not be restricted as a matter of public policy. All online design interfaces must ‘constrain’ user ‘autonomy’ to some extent in that they provide a restricted set of choices within a digital environment. While the DETOUR Act’s “substantial effect” standard provides an important limiting principle, the development of more specific language could help to ensure that regulators enforce against truly harmful design without creating overinclusive definitions.

3. Approaches that Focus on Particular Sectors or Practices

Another approach to regulating manipulative design has been to write bills targeting specific types of design viewed as particularly pernicious, such as those that encourage compulsive usage. For example, two recent federal bills take this approach: the Social Media Addiction Reduction Technology Act (“SMART Act”), which would outlaw auto-refreshing content, music, and video feeds without “natural stopping points,” and S.1629, which would regulate the use of loot boxes, which can encourage addictive, gambling-like behavior in gaming. The ‘New York child data privacy and protection act’ (S.B. 9563) would grant the New York attorney general the authority to “ban auto-play, push notifications, prompts, in-app purchases, or any other feature in an online product targeted towards child users that it deems to be designed to inappropriately amplify the level of engagement a child user has with such product.” Advocates recently petitioned the FTC to make rules “prohibiting the use of certain types of engagement-optimizing design practices” in digital services used by young people, including practices like “low-friction variable rewards,” “design features that make it difficult for minors to freely navigate or cease use of a website or service,” and “social manipulation design features.”

Unlike the CA AADC, which addresses deceptive design broadly, these proposals focus on manipulative design patterns that lead users to engage in compulsive usage of particular platforms and services. The tighter focus of these proposals may provide more clarity than bills that seek to regulate manipulative design more widely. However, bills of this nature have largely not been enacted, perhaps because the design features they propose to regulate are welcomed by some users and disliked by others. For example, many consumers seem to enjoy, and seek services that offer, auto-play and infinitely refreshing music or content feeds. Others object to these exact design elements, stating that these features induce or exacerbate compulsive behavior. These competing views may resolve as researchers continue to explore the addictive nature of certain online platforms and activities, as well as the potentially dire consequences of social media, gaming, and online gambling addictions. Risks and regulatory strategies may be different for adults as compared to children and adolescents.

While narrower than other manipulative design limitations, the range of engagement-fostering designs targeted by bills like these is still quite broad, and, moving forward, policymakers must reckon with the difficulty of drawing clear lines between a platform trying to appeal to consumers with desired content recommendations and one attempting to encourage compulsive usage. The drafters of such bills should aim to be as clear as possible about the exact conduct they seek to prohibit, such as designs that trick children into unintentionally disclosing sensitive data or that have been reliably shown to cause financial or emotional harm. Bills targeting specific forms of manipulative design will likely be most effective when they specifically forbid demonstrably harmful designs and provide clarity about the exact scope of prohibited conduct.

Conclusion

Increased U.S. legislative and enforcement activity regarding manipulative design is an emerging trend that is likely to continue, both inside and outside the context of privacy legislation and enforcement actions. However, there remains uncertainty regarding what forms of design are illegal under existing authority and the appropriate scope of enforcement. Thus far, the dominant trend has been for lawmakers to ban the use of manipulative design in obtaining consent for data collection and processing. However, several newer privacy laws and proposals address manipulative design both more broadly and more specifically, in ways that could have significant impacts on a range of digital services.

Regulated entities and user experience (UX) designers looking to the future can and should use long-standing legal and ethical concepts around fairness, fraud, and deception as touchstones to guide their design decisions. Still, the ongoing developments and uncertainty in the regulatory framework surrounding manipulative design mean that any entity operating a consumer-facing interactive interface must continue to pay close attention to legal developments, as well as to conversations among consumer advocates and the design community about forms of UX design that appear to harm individuals.

FPF in 2022: A Year in Review

As 2022 comes to an end, we wanted to reflect on a year that saw the Future of Privacy Forum (FPF) expand its presence both domestically and around the globe, while producing engaging events, thought-provoking analysis, and insightful publications.

Global Expansion

In 2022, FPF closely followed and advised upon significant developments in Asia, the European Union, Africa, and Latin America. We also discussed privacy and data protection with many of you at key conferences and events across the globe including in Washington, DC, Brussels, Singapore, Istanbul, Tel Aviv, and Rio de Janeiro.

FPF saw its presence in Asia continue to grow as the FPF Asia-Pacific office entered its second year. We introduced expert Josh Lee Kok Thong as Managing Director of FPF’s activities in the region. During FPF Asia-Pacific’s opening, we announced a partnership with the Asian Business Law Institute (ABLI) to support the convergence of data protection regulations and best privacy practices. As part of this partnership, FPF and ABLI published a joint series of comprehensive reports exploring the role and limits of consent in the data protection laws and regulations of 14 jurisdictions in the Asia-Pacific region. This months-long series culminated in a detailed comparative report on the requirements for processing personal data, launched alongside the 58th Asia-Pacific Privacy Authorities Forum hosted by the Personal Data Protection Commission of Singapore.

FPF remains consistently active in the European Union, with several engaging events bringing together the European data privacy community and numerous thought-provoking blogs, reports, and analyses published in 2022. FPF launched its comprehensive report analyzing case law under the General Data Protection Regulation (GDPR) in cases involving automated decision-making, and hosted its 6th Annual Brussels Privacy Symposium with the Brussels Privacy Hub of Vrije Universiteit Brussel, an event that officially launched the joint International Observatory on Vulnerable People in Data Protection.

This year, FPF further expanded its global scope to Argentina and Japan with the appointment of two new Senior Fellows, Pablo Palazzi and Takeshige Sugimoto, and to Africa with the addition of Policy Analyst Mercy King’ori. We published several thought-provoking reports exploring DPA strategies in Africa, data localization in China, and developments in open banking from a global perspective, and launched our popular infographics in Chinese to reach an even larger audience. In addition, our global experts provided analysis on privacy and data protection developments in India, Argentina, Indonesia, and Kenya.

US Legislative Activity

FPF continues to convene industry experts, academics, consumer advocates, and other experts to explore the challenging issues in the data protection and privacy field. Our experts have testified in front of state and national legislative bodies to provide analysis surrounding potential privacy legislation.

2022 saw the introduction of the bipartisan American Data Privacy and Protection Act (ADPPA), with FPF’s Bertram Lee testifying in front of the US House Energy and Commerce Subcommittee on Consumer Protection and Commerce supporting Congress’ efforts on the legislation. Experts Bertram Lee and Stacey Gray discussed ADPPA in the context of civil rights and in relation to California’s laws in The Hill and Lawfare editorials respectively. In addition to federal privacy legislation, 2022 also saw the introduction of consumer privacy laws in Utah and Connecticut, joining California, Virginia, and Colorado.

Last month, FPF urged the Federal Trade Commission to prioritize practical rules that clearly define individuals’ rights and companies’ responsibilities in our comments filed in response to the Commission’s Advance Notice of Proposed Rulemaking. In addition, FPF has submitted written comments regarding draft Colorado and California regulations and to the White House Office of Science and Technology Policy.

For the 12th year in a row, FPF recognized leading privacy research and analytical work with the virtual Privacy Papers for Policymakers Award. The winners spoke about their research in front of an audience of academic, industry, and policy professionals in the field. The event featured keynote speaker Colorado Attorney General Phil Weiser who spoke on his approach to fostering conversations in the privacy space that bring together policymakers and academics while ensuring the integrity of the discussions.

Youth & Education Privacy

Federal and state policymakers turned to the protection of children online this year, with President Biden notably mentioning it during his State of the Union address. In California, the Age-Appropriate Design Code Act was signed into law. Our Youth & Education experts provided analysis in two policy briefs: one providing a comprehensive analysis of the law, and a second comparing the California legislation to the United Kingdom’s Age-Appropriate Design Code.

FPF also made a key hire in David Sallay as our Youth & Education Privacy Director. David comes to FPF from the Utah State Board of Education, where he served as Chief Privacy Officer and Student Privacy Auditor and worked with schools and districts on implementing Utah’s state student privacy law.

Research Data Sharing and Emerging Technologies

As stakeholders and policymakers become increasingly interested in corporate and academic research data sharing, FPF released “The Playbook: Data Sharing for Research” as a resource laying out best practices for instituting research data-sharing programs between corporations and research institutions. Alongside this, we produced an infographic focused on the benefits, challenges, and opportunities for data sharing in research, including its key players and next steps. We also launched the Ethics and Data Sharing in Research Working Group to receive late-breaking analysis of emerging US legislation affecting research and data.

In 2022, we focused on emerging technologies by producing an infographic highlighting extended reality (XR) technologies, including virtual (VR), mixed (MR), and augmented (AR) reality, by visualizing how XR data flows work and exploring several use cases that these technologies may support. We also introduced a Brain-Computer Interfaces (BCIs) blog series building on our earlier report with IBM, analyzing BCIs in the context of healthcare and commercial and government use.

The FPF team welcomed many new faces in 2022 building our expertise in several key areas and regions of the world. In addition to Josh Lee Kok Thong as Managing Director of APAC and David Sallay as Director of Youth & Education Privacy, we welcomed Samuel Adams (Ad Tech), Lauren Anderson (Membership), Maria Badillo (Latin America, FPF Fellow), Jamie Gorosh (Youth & Education), Robin Grossfeld (Membership), Mercy King’ori (Africa), Bertram Lee (Artificial Intelligence), Aaron Massey (Ad Tech and Platforms), Dominique Matthews (Membership), Stefania Medrano (Operations), Akosua Osei (Events), Dominic Paulger (APAC), Isabella Perera (Global), Alyssa Rosinski (Business Development), Felicity Slater (US Legislation, FPF Fellow), Jameson Spivack (Immersive Tech), Chloe Suzman (US Legislation), Shea Swauger (Research and Ethics), Adonne Washington (Mobility and Location), Stephanie Wong (US Policy, FPF Fellow), and Jordan Wrigley (Health) to the team. In 2022, we also expanded our scope by introducing new Working Groups focusing on Biometrics, Immersive Technologies, and Youth Privacy. 

This is by no means a comprehensive list of all of FPF’s important and engaging work in 2022, but we hope it gives you a sense of our work’s impact on the privacy community and society at large. We believe our success is due to deep engagement with privacy experts in industry, academia, civil society, and government, and to our conviction that collaborating across sectors and disciplines is needed to advance practical safeguards for data uses that benefit society.

Keep updated on FPF’s work by subscribing to our monthly briefing and following us on Twitter and LinkedIn.

On behalf of the entire FPF team, we wish you a very Happy New Year!

Event Report: FPF APAC and ABLI Report Launch Event and Panel on the Sidelines of the 58th Asia Pacific Privacy Authorities (APPA) Forum in Singapore

Edited by Josh Lee Kok Thong and Isabella Perera

On November 30, the Future of Privacy Forum (FPF) and the Asian Business Law Institute (ABLI) held a joint event to launch their new report, “Balancing Organizational Accountability and Privacy Self-Management in Asia-Pacific,” which provides a detailed comparison of the legal bases for processing personal data in 14 jurisdictions in the Asia-Pacific (APAC) region: Australia, China, India, Indonesia, Hong Kong SAR, Japan, Macau SAR, Malaysia, New Zealand, the Philippines, Singapore, South Korea, Thailand, and Vietnam. The report builds upon a series of 14 individual reports released throughout 2022 that provide an overview of the legal bases for processing personal data in each of these jurisdictions.

This launch event took place on the sidelines of the 58th APPA Forum, hosted by Singapore’s Personal Data Protection Commission (PDPC) between November 29 and 30. Many APPA members – which include privacy and data protection authorities from 18 jurisdictions in APAC and the broader Pacific region – as well as representatives from industry, civil society, and the legal community joined FPF and ABLI for this event.

The event began with introductory remarks by Dr. Gabriela Zanfir-Fortuna (Vice President for Global Privacy, FPF), Josh Lee Kok Thong (Managing Director, FPF APAC), and Rama Tiwari (Chief Executive, Singapore Academy of Law), as well as a brief presentation by Dominic Paulger (Policy Manager, FPF APAC) that outlined the scope and main findings of the report.


Photo Credit: Personal Data Protection Commission (PDPC), Silas See

These remarks were followed by a panel discussion that focused on key themes from the report and considered how to promote consistency and interoperability in legal bases for processing personal data around the APAC region while also ensuring the right balance between the interests of individuals, the organizations that process their personal data, and society at large.

The discussion was moderated by Yeong Zee Kin (Deputy Commissioner, PDPC), who was joined by four expert panelists: Dr. Clarisse Girot (Head of Data Governance and Privacy Unit, OECD); Leandro Angelo Y. Aguirre (Deputy Commissioner, National Privacy Commission, Philippines); Laura Gardner (Senior Counsel, Data Protection, Microsoft); and Rajesh Sreenivasan (Partner and Head of Technology, Media, and Telecommunications Practice, Rajah and Tann Singapore).

This post summarizes this exciting discussion and the key takeaways.

Role of consent

Moderator Yeong Zee Kin commenced the discussion by asking how regulators should think about the role of consent in the digital economy.

Dr. Clarisse Girot noted that due to advances in technology and changes in how organizations process personal data, consent has ceased to be meaningful. In her view, while consent plays an important role in data protection laws, it has been overused in the APAC region because organizations that process personal data and practitioners have tended to regard consent as the easiest available option to comply with regional laws, especially if regulators have not seriously considered alternatives to consent. She suggested that overuse of consent could lead to “consent fatigue” for individuals.

Dr. Girot further noted that it would be appropriate to rely on consent in situations where individuals: (1) understand and can make a genuine decision about how their personal data will be used, (2) voluntarily provide their personal data to an organization, and (3) can withdraw their consent if necessary. However, she considered that such situations would likely be rare in practice. She, therefore, proposed that it may be necessary for regulators to ensure that their data protection laws contain legal bases besides consent to protect individuals from risks of harm.

In this regard, Dr. Girot highlighted the “legitimate interest” basis in European data protection law as a viable alternative basis to consent in situations where it is inappropriate for organizations to seek consent. She explained that because consent requirements are more strictly enforced in the European Union (EU), organizations in the EU tend to rely on legitimate interests (rather than consent) as a legal basis for processing data in most situations. However, she noted that there may be challenges to adopting such an approach in APAC as only a few jurisdictions in APAC currently recognize a legal basis for processing personal data premised on legitimate interests, and that other APAC jurisdictions are unlikely to enact reforms to recognize this basis in the near future.

Laura Gardner agreed that the processes involved in obtaining consent can overwhelm individuals and lead to “consent fatigue”. She added that even where individuals give valid consent, they may not make meaningful decisions as they may not always understand how their personal data will be used, especially if they rush to give consent in order to access a product or service as quickly as possible. In this regard, Ms. Gardner highlighted the importance of providing effective notice, using appropriate user interfaces, and providing the right level of information “just in time” to enable users to make meaningful and informed decisions about how their personal data is used.

Leandro Aguirre shared the National Privacy Commission (NPC)’s experience in implementing consent requirements in the Philippines’ data protection law, the Data Privacy Act of 2012 (DPA). He explained that although the DPA provides several alternative legal bases to consent for processing personal data (including legitimate interests), the NPC initially focused on consent because conceptually, it was easier to understand and appeared to give individuals control over how their personal data would be used.

Mr. Aguirre further clarified that consent and notice are distinct concepts under the DPA: if an organization relies on consent to process personal data under the DPA, the organization would be required to notify the data subject, obtain consent in a recorded manner, and ensure that the consent is freely given, informed, and specific, and that there is an indication of will on the part of the data subject. By contrast, if the organization relies on an alternative legal basis to consent in the DPA to process personal data, then the organization would only be required to notify the data subject.

However, he added that the NPC had realized that in practice, organizations were overusing consent and were passing the burden of validating and legitimizing the processing of personal data to the data subject, causing information overload and “consent fatigue.” Hence, the NPC has been working on a set of guidelines that aims to shift the idea of consent to just-in-time notices, which Mr. Aguirre hopes will encourage companies to rely on other legal bases, such as legitimate interests, to process personal data.

Promoting complementary alternatives to consent, like legitimate interests

Moving the discussion from consent to alternatives to consent, moderator Yeong Zee Kin shared a regulator’s perspective on alternatives to consent, focusing on the PDPC’s experiences of developing alternatives like legitimate interests in the 2020 amendments to Singapore’s Personal Data Protection Act 2012 (PDPA). He explained that when the PDPC first proposed including a legitimate interest basis in the PDPA, the PDPC was guided by the legislative purpose of the PDPA, which is to govern processing of personal data in a manner that recognizes both the right of individuals to protect their personal data and the need of organizations to process personal data for reasonable purposes.

Laura Gardner observed that a benefit of the legitimate interest basis, compared with consent, is that it builds accountability into organizations. This is because the basis requires organizations to assess the benefits and risks to the individual of processing personal data and, if necessary, take steps to mitigate risks. Nevertheless, she also noted that a difficulty with this basis is that organizations may not feel as comfortable relying on it because they may be concerned that regulators will not agree with the organization’s assessment of the balance of interests. She stressed that regulators could help organizations gain greater familiarity with the legitimate interest basis by issuing clear guidance with specific examples of use cases where the basis could be applied.

Leandro Aguirre emphasized that when relying on legitimate interests, companies are in a better position than the data subject to assess the impact of processing on the data subject as data subjects may not be able to understand everything provided to them. As for how the NPC regards the legitimate interest basis, Mr. Aguirre explained that three things must be considered: (1) organizations have to establish the existence of a legitimate interest; (2) the processing of personal data must be necessary for this legitimate interest; and (3) the legitimate interest must not override the fundamental human rights and freedoms of data subjects. He added that in the event of a violation, the NPC would only recommend prosecution if there were gross negligence on the part of the organization. This means that as long as an organization has accountability measures in place, it would not necessarily face an enforcement action from the NPC simply because the regulator does not agree with the organization’s legitimate interest assessment. He further added that dialogue between regulators and organizations is essential to increase clarity around the use of the legitimate interest and to ensure that organizations are comfortable relying on this legal basis when processing personal data.

Rajesh Sreenivasan shared the perspective of legal practitioners who advise organizations that process personal data. He explained that despite the existence of accountability-focused alternatives to consent, like legitimate interests and the business improvement exception in Singapore’s PDPA and other similar laws, many lawyers today would still advise their clients to rely on consent, as practitioners may believe that it is easier to demonstrate and operationalize compliance with consent requirements (e.g., by producing a completed consent form). He added that practitioners and clients may also believe that accountability approaches to processing personal data would impose greater burdens on organizations.

Nonetheless, Mr. Sreenivasan observed that there are objective measures that can be used to demonstrate accountability, such as data protection impact assessments. He also noted that accountability-focused approaches like the legitimate interest basis may prove useful in situations where it is difficult to obtain meaningful consent, such as data analytics or artificial intelligence applications where data processing is so complex and dynamic that individuals may not be well-placed to understand how their personal data will be processed.

Mr. Sreenivasan also drew attention to Singapore’s decision in the 2020 amendments to the PDPA to create a legal basis for processing personal data, known as the “business improvement exception,” to address situations where the balance of interests is more strongly weighted in favor of businesses in developing products and services.

On a final note, Mr. Sreenivasan also stressed that regulators should not compel organizations to use consent, legitimate interests, or other alternative legal bases in specific situations. Instead, he suggested that regulators should permit organizations to make choices based on their own needs.

Consistency and interoperability of regional laws

Yeong Zee Kin explained that in amending Singapore’s PDPA, the PDPC also sought to facilitate cross-border compliance by ensuring that the PDPA had similar structures to those in the EU’s General Data Protection Regulation (GDPR) and other laws in APAC that had followed the GDPR’s example, such as a legitimate interest basis for processing personal data.

Dr. Clarisse Girot stressed that consent is still the main connecting point between jurisdictions in the APAC region. She suggested that even if individual jurisdictions promoted alternatives to consent, organizations that process personal data in multiple jurisdictions would likely only start incorporating those alternatives into their compliance frameworks if there was a “critical mass” of jurisdictions with similar alternatives. Dr. Girot thus encouraged regional regulators to come together to look for similar structures within their respective laws and issue consistent guidance on alternatives to consent.


FPF Releases “The Playbook: Data Sharing for Research” Report and Infographic

Today, the Future of Privacy Forum (FPF) published “The Playbook: Data Sharing for Research,” a report on best practices for instituting research data-sharing programs between corporations and research institutions. FPF also developed a summary of recommendations from the full report.

Facilitating data sharing for research purposes between corporate data holders and academia can unlock new scientific insights and drive progress in public health, education, social science, and a myriad of other fields for the betterment of broader society. Academic researchers use this data to consider consumer, commercial, and scientific questions at a scale they cannot reach using conventional research data-gathering techniques alone. Such data has also helped researchers answer questions on topics ranging from bias in targeted advertising and the influence of misinformation on election outcomes to early diagnosis of diseases through data collected by fitness and health apps.

The playbook addresses vital steps for data management, sharing, and program execution between companies and researchers. Creating a data-sharing ecosystem that positively advances scientific research requires a better understanding of the established risks, opportunities to address challenges, and the diverse stakeholders involved in data-sharing decisions. This report aims to encourage safe, responsible data-sharing between industries and researchers.

“Corporate data sharing connects companies with research institutions, by extension increasing the quantity and quality of research for social good,” said Shea Swauger, Senior Researcher for Data Sharing and Ethics. “This Playbook showcases the importance, and advantages, of having appropriate protocols in place to create safe and simple data sharing processes.”

In addition to the Playbook, FPF created a companion infographic summarizing the benefits, challenges, and opportunities of data sharing for research outlined in the larger report.


As a longtime advocate for facilitating the privacy-protective sharing of data by industry with the research community, FPF is proud to have created this set of best practices for researchers, institutions, policymakers, and data-holding companies. In addition to the Playbook, the Future of Privacy Forum has also opened nominations for its annual Award for Research Data Stewardship.

“Our goal with these initiatives is to celebrate the successful research partnerships transforming how corporations and researchers interact with each other,” Swauger said. “Hopefully, we can continue to engage more audiences and encourage others to model their own programs with solid privacy safeguards.”

Shea Swauger, Senior Researcher for Data Sharing and Ethics, Future of Privacy Forum

Established by FPF in 2020 with support from The Alfred P. Sloan Foundation, the Award for Research Data Stewardship recognizes excellence in the privacy-protective stewardship of corporate data shared with academic researchers. The call for nominations is open and closes on Tuesday, January 17, 2023. To submit a nomination, visit the FPF site.

FPF has also launched a newly formed Ethics and Data in Research Working Group; this group receives late-breaking analyses of emerging US legislation affecting research and data, meets to discuss the ethical and technological challenges of conducting research, and collaborates to create best practices to protect privacy, decrease risk, and increase data sharing for research, partnerships, and infrastructure. Learn more and join here.

Driver Impairment and Privacy: What Lies Ahead for Driver Impairment Detection?

The 2021 Infrastructure Act mandates that the US Department of Transportation issue a rule requiring the creation and implementation of monitoring systems to deter drivers impaired by alcohol, inattention, or drowsiness. The Department of Transportation (DOT) must establish a Federal mandatory motor vehicle safety standard to “passively monitor a motor vehicle driver’s performance to accurately detect if the driver may be impaired.”  (“Advanced Drunk and Impaired Driving Prevention Technology,” Sec. 24220(b)(1)(A)(i)). Details in the statute are sparse; the DOT’s rule will likely establish many practical and technical details that the statute does not address.

Among the actions required under the 2021 law, the DOT must set a safety standard for the use of blood alcohol detection technology within three years, after which vehicle manufacturers will have two to three years to install the systems in all new passenger motor vehicles manufactured after the effective date. In practice, such systems will be required for all new vehicles beginning November 2026, although they could be rolled out sooner. DOT’s National Highway Traffic Safety Administration (NHTSA) will lead the rulemaking.

Today, many car makers offer different types of driver assistance technologies that can reduce crashes and improve safety for drivers, passengers, pedestrians, cyclists, and other road users. The Advanced Impaired Driving Technology mandated by the Infrastructure Act may further enhance safety, but its implementation must address its impact on drivers and passengers, including the collection and use of their personal data. Driver acceptance of auto safety measures is particularly important; previous attempts to mandate seat belt use resulted in public backlash and quick Congressional rollback. As the DOT, automakers, safety advocates, and other stakeholders think through issues raised by the Infrastructure Act’s mandate, questions must be considered regarding the accuracy of the technology, implementation by carmakers, driver acceptance, and the overall impact on the privacy of drivers and passengers:

1. Accuracy – The accuracy of the Advanced Impaired Driving Technology will be key to its proper, unbiased functioning and to its acceptance and use by drivers.

2. Data Collection and Use – Cars already collect significant amounts of data for a variety of functional, safety, and other purposes. Policymakers tasked with establishing standards and guidelines for the Advanced Impaired Driving Technology should ensure that any personal data collected or retained is limited, reasonable, and processed safely and fairly. 

3. Driver Acceptance – Stakeholders must decide what should occur in situations where the technology detects that a driver is intoxicated.

Accuracy

The accuracy of Advanced Impaired Driving Technology will be central to its proper functioning and to its use by drivers. The law requires that the technology “accurately identify whether the driver may be impaired” but does not specify particular accuracy thresholds. Policymakers at DOT and NHTSA will confront this issue as they conduct the rulemaking and establish the rule, and should consider testing to ensure that the technology is fair, targeted, and consistently highly accurate.

The concept of using technology to detect driver impairment, including blood alcohol concentration (BAC) levels, is not novel. For example, in some jurisdictions, a person with a DUI conviction can be required to use ignition interlocks – devices in vehicles that detect BAC through breath (commonly known as breathalyzers). Every time the person gets into the car, the ignition interlock device requires them to breathe into a tube. While commonly used in ignition interlocks and during traffic stops, breathalyzers are not 100% accurate and cannot achieve the same level of accuracy as blood or urine tests. Breathalyzer errors can result from miscalibration or human error. As DOT considers rules for Advanced Impaired Driving Technology, the agency will likely need to consider other potential sources of inaccuracy, including mischaracterization of lawful substances (such as mouthwash) as intoxicants or misattribution of passenger intoxication to an unimpaired driver.

Existing technologies used by carmakers today to detect driver impairment may or may not be sufficient to meet the statute’s requirement for accuracy. For example, carmakers already offer safety features such as driver-facing cameras and automated detection of erratic driving. An alert from these driver-monitoring technologies may indicate that a driver is intoxicated, but not always: an alert can also result from drowsiness, distraction, or emotional stress. Conversely, an intoxicated driver may not display typical impairment signs. As a result, it is not clear that existing technologies meet the law’s mandate of “accurately identify[ing] whether the driver may be impaired.”

Data Collection and Use

As vehicle manufacturers begin to design and implement technologies that detect blood alcohol concentration (BAC), DOT and NHTSA should ensure that guidelines for any personal data collected are clear, limited, and reasonable, and require data to be processed safely and fairly. Many cars already collect significant amounts of data about drivers and passengers. This data may be collected to support safety, maintain efficient performance, increase driver or passenger convenience, or provide in-car entertainment. For instance, the “infotainment” features in a car can be connected to the radio and other apps that collect information on the music being played and when the system is turned on or off.

It is not clear whether Advanced Impaired Driving Technology will need to collect, share, or retain detailed records in order to function. For example, a vehicle could incorporate a method to delete on-device data or automatically delete data on a rotating basis when it is no longer necessary. However, there could be significant reasons to collect and retain data, including to test for and ensure accuracy or to allow a driver to dispute an inference.
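
To make the rotating-deletion idea concrete, the following minimal sketch (in Python) shows one way an on-device log might age out records automatically. Everything here is a hypothetical illustration: the record fields, the 72-hour retention window, and the dispute flag are assumptions for the sketch, not features of any actual vehicle system or regulatory requirement.

```python
from collections import deque
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical retention window; an actual rule or manufacturer policy would set this value.
RETENTION_WINDOW = timedelta(hours=72)

@dataclass
class DetectionRecord:
    timestamp: datetime
    score: float            # illustrative impairment score produced by the detection system
    disputed: bool = False  # set if the driver contests the inference

class RotatingLog:
    """Keeps detection records on-device and deletes them once they age out,
    unless the driver has disputed the inference and the record is still needed."""

    def __init__(self) -> None:
        self._records: deque[DetectionRecord] = deque()

    def add(self, record: DetectionRecord) -> None:
        self._records.append(record)
        self.purge(now=record.timestamp)

    def purge(self, now: datetime) -> None:
        # Rotating deletion: drop records older than the retention window,
        # keeping only those flagged for dispute resolution.
        self._records = deque(
            r for r in self._records
            if r.disputed or now - r.timestamp <= RETENTION_WINDOW
        )
```

Under this kind of design, routine readings age out automatically, while records a driver has contested could be retained until the dispute is resolved.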

Car manufacturers must also be transparent about options for data re-use or third-party data access. Commercial entities, advocates, or government agencies may seek access to (or the ability to access) individual or aggregate information from Advanced Impaired Driving Technology. Collecting data on a driver’s intoxication level could have implications for insurance risk scores, driver safety scores, public safety, and other purposes that affect drivers and passengers.

Driver Acceptance

Finally, stakeholders must consider what should occur in situations where the technology detects that a driver is intoxicated. The mandate states that the technology should “prevent or limit vehicle operation” but does not specify how that should or would occur. For example, when a driver is flagged as impaired, the technology may prohibit vehicle operation, but other options may include requiring extra steps to operate the vehicle, displaying warnings, limiting the speed of the vehicle, or alerting the owner or a designated contact registered to the car. 

Auto manufacturers could design systems that would prevent a car from starting or limit its performance if a driver is intoxicated. This may prevent or limit impaired driving, a serious problem in the US: on average, 32 people die every day in drunk driving crashes. However, a false positive could cause surprise or concern for drivers who do not expect to be prevented from driving, including in emergencies or other unexpected situations. For some drivers, it could create new safety risks if they are unable to leave situations in which they face threats to their physical safety. This could also affect consumer attitudes toward and engagement with the technology. Alternatively, a vehicle could display warnings, reduce vehicle functionality, or introduce friction at the point of starting the car.

What’s Ahead?

There are many open questions about this mandate, how the technology will work, and its effect on automakers and consumers. The DOT has two additional years to meet the Congressional deadline to establish standards for the use of blood alcohol detection technology in vehicles. Meanwhile, vehicle manufacturers should be thinking ahead and engaging with NHTSA about how they can implement such features in new vehicles. FPF will continue to monitor the regulatory process, including the privacy implications of this technology and how it may be implemented in the future.

Record Set: Assessing Points of Emphasis from Public Input on the FTC’s Privacy Rulemaking

More than 1,200 law firms, advocacy organizations, trade associations, companies, researchers, and others responded to the Federal Trade Commission’s Advance Notice of Proposed Rulemaking (ANPR) on “Commercial Surveillance and Data Security.” Significantly, the ANPR initiates a process that may result in comprehensive regulation of data privacy and security in the United States, and marks a notable change from the Commission’s historical case-by-case approach to addressing consumer data misuse. Comments received in response to the ANPR will be used to generate a public record that informs the Commission in deciding whether to pursue one or more draft rules, and will be generally available for any policymaker to use in future legislative proposals. The Future of Privacy Forum’s comment is available here.

Using a sample of 70 comments, excluding our own, selected from stakeholders representing various sectors and perspectives, the Future of Privacy Forum analyzed responses for common themes and areas of divergence. Below is a summary of key takeaways.

1. Areas of Agreement

a. Data Minimization

Many submissions encouraged the Commission to create a rule or standard requiring that companies engage in some form of data minimization. Data minimization is a foundational data protection principle, appearing in the Fair Information Practice Principles (FIPPs) and required by the European Union’s General Data Protection Regulation (GDPR) and other international regulations. The European Data Protection Supervisor (EDPS) emphasized that an FTC data minimization rule would help harmonize data protection standards between the European Union (EU) and the United States (U.S.), and would codify the data protection best practices established by the Commission’s history of enforcement. Several comments focused on the ability of data minimization to create “market wide” incentives that could disrupt an environment that may provide competitive advantages to organizations that are not responsible data stewards, while Palantir noted that, unlike the exercise of data subject rights, data minimization requires no extra action from users.

A small group of responses noted that data minimization has implications for machine learning (ML) and the development of artificial intelligence (AI) systems. While such systems must be trained on vast quantities of data, commenters noted that it is equally important that such data be high quality. Palantir emphasized that data minimization, insofar as it required the deletion of out-of-date or otherwise flawed data, would support this goal. EPIC noted that data minimization requirements would help ensure that businesses’ use of personal data is aligned with consumer expectations, observing that, “[c]onsumers reasonably expect that when they interact with a business online, that business will collect and use their personal data for the limited purpose and duration necessary to provide the goods or services they have requested,” and not retain their data beyond that duration or for other uses. Finally, Google, the Wikimedia Foundation, and other commenters emphasized that a data minimization rule would support data security objectives as well: if companies retain less personal data, data breaches, when they do occur, will be less harmful to consumers.

b. Data Security

There was also broad, though not uniform, support for a data security rule requiring businesses to implement reasonable data security programs. Many commenters noted that data security incidents are a common occurrence, are not reasonably avoidable by consumers, and pose grave risks to individuals, including identity theft. The EDPS underscored the role of data security in protecting core rights and freedoms under EU law and the GDPR, and recommended that the Commission require organizations to engage in data protection impact assessments as well as data protection by design and default, and to use encryption and pseudonymization to protect personal data.

The Computer & Communications Industry Association (CCIA) observed that any data security rulemaking should be harmonized with standards established by the Cybersecurity and Infrastructure Security Agency (CISA) and the National Institute of Standards and Technology (NIST). In the same vein, the Software Alliance (BSA) “encourage[d] the agency…to recognize that the best way to strengthen security practices cross industry sectors is by connecting any rule on data security to existing and proposed regulations, standards, and frameworks.” The BSA also asked that the Commission recognize the “shared responsibilities of companies and their service providers” in protecting consumer data, and create rules that reflect this dual-responsibility framework, as well as the different relationships that companies and service providers have with consumers. Some commenters discussed how the Commission’s rulemaking should interact with other agencies. For example, the Center for Democracy and Technology (CDT) emphasized that reasonable data security requirements should extend to the government services context, including government-run educational settings, as well as non-financial identity verification services.

2. Areas of Contention

a. The Commission’s Authority

By far, the most common disagreement among commenters was whether, and to what extent, the Commission could promulgate data privacy and security regulations through its statutory authority. Most commenters took a moderate approach, believing that the Commission had some limited rulemaking authority in this space. Commenters provided a variety of bases for where the Commission could and could not create rules, and why. Some entities believed that the Commission could only address practices that clearly demonstrate consumer harm, while others encouraged the Commission to focus on FTC enforcement actions that have already survived judicial scrutiny. Google noted that the “FTC rests on solid ground” for a data security rule given the Third Circuit’s decision in FTC v. Wyndham Worldwide Corp., which affirmed the agency’s authority to regulate data security as an unfair trade practice. While many commenters also argued that the Commission could only create rules that would not overlap with other regulatory jurisdictions, some advocates believed that the FTC can use its authority to regulate unfair and deceptive practices like discrimination even when other agencies have concurrent jurisdiction. The Lawyers’ Committee for Civil Rights Under Law noted the FTC’s extensive experience sharing concurrent jurisdiction with other agencies, where “[i]t works on ECOA with the Consumer Financial Protection Bureau, on antitrust with the Department of Justice, and on robocalls with the Federal Communications Commission.”

However, comments also existed at both ends of the spectrum regarding the FTC’s authority to act. Some commenters, largely from civil society organizations, civil rights groups, and academia, argued that the Commission has substantial authority to address data privacy and security where data practices meet the statutory requirements for being “unfair or deceptive.” These commenters reasoned that Congress intended the Commission’s Section 5 authority to remain flexible enough to address evolving commercial practices, and thus that it can readily be applied to data-driven activities that cause unavoidable and substantial injury to consumers.

Other commenters, largely trade associations, business groups, and some policymakers, questioned the Commission’s authority to conduct this rulemaking under the FTC Act. While some, like the Developers Alliance, argued that most data collection practices do not constitute “unfair or deceptive” trade practices, the majority of commenters in opposition argued that the Commission lacks the authority to conduct this type of rulemaking because the regulation of data privacy and security is a “major question” best left to Congress. These comments focused on the Supreme Court’s 2022 ruling in West Virginia v. EPA, which held that regulatory agencies, absent clear congressional authorization, cannot issue rules on major questions that affect a large portion of the American economy. Several Republican U.S. Senators noted that “simply stating within the ANPR that within Section 18 of the FTC Act, Congress authorized the Commission to propose a rule defining unfair or deceptive acts or practices…is hardly the clear Congressional authorization necessary to contemplate an agency rule that could regulate more than 10% of U.S. GDP ($2.1 trillion) and impact millions of U.S. consumers (if not the entire world).” The lawmakers further argued that even if the Commission could prove clear Congressional authorization, the rulemaking would likely violate the FTC Act because the ANPR failed to describe the area of inquiry under consideration with the mandated level of specificity.

b. “Commercial Surveillance”

The Commission’s framing of the ANPR around “commercial surveillance” was another area that generated controversy. The ANPR defines “commercial surveillance” as “the collection, aggregation, analysis, retention, transfer, or monetization of consumer data and the direct derivatives of that information.”

Several comments supported the Commission’s framing and detailed the multitude of ways in which businesses track private individuals over time and space. The Electronic Privacy Information Center (EPIC) stated, “[t]he ability to monitor, profile, and target consumers at a mass scale have created a persistent power imbalance that robs individuals of their autonomy and privacy, stifles competition, and undermines democratic systems.” The most common examples of practices considered “commercial surveillance” by commenters included: targeted advertising, facial recognition, pervasive tracking of people across services and websites, unlimited sharing and sale of consumer information, and secondary uses of consumer information. 

On the other side, commenters argued that the term “commercial surveillance” was an unfair and overly broad characterization. Trade associations like the Information Technology Industry Council argued that the term implies a negative connotation for any commercial activity that collects or processes data, even the many legitimate, necessary, and beneficial uses of data that make products and services work for users. Many comments emphasized the crucial role of consumer data in our society and how it has been used to fuel social research and innovation, including telehealth, studies of COVID-19 vaccine efficacy, the development of assistive AI for disabled individuals, the identification of bias and discrimination in school programs, and ad-supported services like newspapers, magazines, and television.

3. Other Notable Focuses

a. Automated Decision-Making and Civil Rights

A large contingent of advocacy organizations documented how automated decision-making systems can exacerbate discrimination against marginalized groups. Organizations including the National Urban League, Next Century Cities, CDT, Upturn, the Lawyers’ Committee for Civil Rights Under Law, and EPIC provided illustrative examples of discriminatory outcomes in housing, credit, employment, insurance, healthcare, and other areas brought about by algorithmic decision-making.

Industry groups including the National Association of Mutual Insurance Companies argued that discrimination concerns are best addressed through Congressional action, given that the FTC Act does not mention discrimination and does not answer the foundational legal question of “whether it is a regime of disparate treatment or disparate impact.” Many advocacy groups refuted this assertion and argued for the necessity of addressing algorithmic discrimination in a rule because of gaps in existing civil rights law and because of the Commission’s history of utilizing concurrent jurisdiction. For example, Upturn highlighted three major gaps, noting that current law leaves large categories of companies, such as hiring screening technology firms, uncovered; fails to address modern-day harms such as discrimination by voice assistants; and does not require affirmative steps to measure and address algorithmic discrimination.

Commenters made a variety of suggestions about how the Commission could address these problems in a rule, including through data minimization (National Urban League), greater transparency (CDT), declaring the use of facial recognition technology an unfair practice in certain settings (Lawyers’ Committee, EPIC), and implementing principles in the Biden administration’s Blueprint for an AI Bill of Rights (Upturn, Lawyers’ Committee). Google emphasized that any rulemaking on AI should be risk-based and process-based and promote transparency, adding that “a process-based approach with possible safe harbor provisions could encourage companies to continually audit their systems for fairness without fear that looking too closely at their systems could expose them to legal liability.”

b. Health Data and Other Considerations in Light of Dobbs

Another strong thread throughout the comments was a concern about the privacy and integrity of health data, particularly in light of the Supreme Court’s 2022 decision in Dobbs v. Jackson Women’s Health Organization. Comments from Planned Parenthood, CDT, the American College of Obstetricians and Gynecologists (ACOG), and the California Attorney General (AG) Rob Bonta all emphasized the impact of the Dobbs decision, which allows states to criminalize the acts of seeking and providing abortion services. For example, the ACOG cited a Brookings Institution article demonstrating the extent to which user data such as geolocation data, app data, web search data, and communications and payments data can be used to make sensitive health inferences.

Responding to concerns about the risk of misuse of geolocation data specifically, Planned Parenthood called upon the Commission to write tailored regulations requiring that the retention of location data be time-bound and linked to a direct consumer request. The Duke/Stanford Cyber Policy Program emphasized that the Commission should seek to establish comprehensive regulations to govern data brokers, and that, “[i]n some cases, the policy response should include restrictions or outright bans on the sale of certain categories of information, such as GPS, location, and health data.” AG Bonta recommended that, “[t]he Commission…prohibit [the] collection, retention or use of particularly sensitive geolocation data, including…data showing that a user has visited reproductive health and fertility clinics.”

Many comments addressed questions around sensitive health-related data that is not otherwise protected by the Health Insurance Portability and Accountability Act (HIPAA). The College of Healthcare Information Management Executives (CHIME) emphasized that many consumers do not understand the scope or scale of the use of their sensitive health data, including data collected by fitness and “femtech” apps. The American Clinical Laboratory Association (ACLA), meanwhile, emphasized that the Commission should not subject entities already subject to HIPAA to new requirements, and argued that de-identified data should be exempt from privacy and security protections. Finally, algorithmic discrimination in the healthcare context was a focus area for several commenters.

c. Children’s Data

Finally, many commenters also weighed in on the particular vulnerability of children online. The Software & Information Industry Association (SIIA), for example, recognized that children deserve unique consideration, but argued that FTC rulemaking on child and student privacy would be duplicative of existing Commission efforts to update COPPA rules, as well as existing education privacy statutory provisions at the federal and state levels. Others suggested that a Commission rule could and should address child safety. Some of their most pressing concerns included:

What’s Next?

The ANPR is merely one step in a lengthy and arduous rulemaking process. Should the Commission decide to move forward with rulemaking after reviewing the public record, it will need to notify Congress, facilitate another public comment process on a proposed rule, conduct informal hearings, and survive judicial review. Regardless of the outcome, the ANPR comment period has provided an ample public record to inform any policymaker about the current digital landscape, the most pressing concerns faced by consumers, and frameworks utilized by companies and other jurisdictions to mitigate privacy and security risks.

The authors would like to acknowledge FPF intern Mercedes Subhani for her significant contributions to this analysis.