The Future of Manipulative Design Regulation
Regulators in the United States and around the globe are bringing enforcement actions and crafting rules intended to combat manipulative design practices online. These efforts are complex and address a range of consumer protection issues, including privacy and data protection risks. They raise thorny questions about how to distinguish between lawful designs that encourage individuals to consent to data practices and unlawful designs that manipulate users through unfair and deceptive techniques. As policymakers enforce existing laws and propose new rules, it is crucial to identify when the design and default settings of online services constitute unlawful manipulative design that impairs users’ intentional decision-making.
This post describes the current U.S. regulatory stance regarding manipulative design (also called deceptive design or “dark patterns”), highlighting major trends and takeaways. We start by reviewing how leading experts have distinguished between persuasive design and manipulative design. We then explain the most prominent rules that address manipulative design in the data protection context, as well as emerging proposals that seek to further regulate manipulative design.
Recent laws and emerging proposals largely use a common approach to crafting these rules. Many focus on the role of design in consent flows and bar online services from acting “to design, modify, or manipulate a user interface with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice.” Policymakers are particularly focused on the quality of organizations’ notices, the symmetry between consent and denial clickflows, the ease of cancellation or revocation of consent, and the presence of design features associated with compulsive usage. Similar but narrower rules also appear in laws and proposals that seek to protect specific vulnerable or marginalized populations, including young people.
Finally, we discuss the opportunities and challenges in applying anti-manipulative design rules to specific business sectors or practices. While narrower approaches have been codified in some circumstances – including with regard to young users – other attempts to legislate in specific, narrow contexts remain mere proposals.
Introduction
Enforcement agencies are increasingly focused on manipulative design practices online. In 2021, the Federal Trade Commission (FTC or agency) convened a workshop regarding “digital dark patterns” and has since demonstrated a heightened interest in bringing enforcement actions that target manipulative designs. In December 2022, the agency announced a $245 million settlement with Epic Games arising from allegations that the gaming company used “dark patterns” in its popular Fortnite game franchise. The FTC complaint argues that Epic used “design tricks” to lead users to make accidental or unauthorized purchases, and blocked accounts when users sought refunds. The settlement follows prior enforcement actions that cited manipulative design as a key element of the Commission’s claim that online services deceived consumers in order to collect or use data.
This increased scrutiny means that privacy practitioners are confronted with the difficult task of understanding, recognizing, and avoiding manipulative design. A central challenge is to distinguish between manipulative design and lawful persuasive practices. There are few straightforward answers. Experts often disagree about which design practices are prohibited by existing laws – including prohibitions against unfair and deceptive acts and practices (“UDAP” laws) – as well as about the optimal scope of proposed new regulatory efforts.
What is Manipulative Design?
Organizations employ lawful, persuasive design every day when seeking to make a product or service look appealing or to truthfully inform consumers about the features of digital services and the available privacy options. Indeed, all design interfaces must constrain user autonomy to some extent in that they provide a limited number of choices within a mediated environment. In contrast, manipulative design tricks individuals or obfuscates material information. For example, manipulative design elements can steer individuals into unwittingly disclosing personal information, incurring unwanted charges, or compulsively using services.
Researchers and advocates have proposed numerous approaches to classifying and categorizing manipulative design techniques. These approaches catalog forms of troubling user experience architecture ranging from the straightforwardly deceptive, such as “sneak into basket” (“[a]dding additional products to users’ shopping carts without their consent”), to the more ambiguous, such as “confirmshaming” (when organizations use emotion-laden language to pressure users to make purchases or maintain subscriptions). Often, illegal manipulative design prevents individuals from taking desired actions, such as canceling a subscription, or obscures information about the terms of an agreement.
The breadth and nuance of deceptive design taxonomies raise challenges for enforcing existing rules and drafting new regulations. It is difficult to address the diverse range of conduct that falls under the umbrella of manipulative design within a single regulatory framework. Furthermore, not all practices identified as “dark patterns” in taxonomies rise to a level of harm or deception that established consumer protection law prohibits or proposed legislation would bar. Ambiguity abounds when experts debate whether a particular practice is unlawful, including practices like delivering emotionally worded messaging to users or establishing default settings that some individuals love but others loathe. This creates tricky problems for policymakers and practitioners seeking to distinguish between lawful and unlawful design techniques, and a dominant approach to analyzing these issues has yet to emerge. Still, some central themes are developing, as policymakers focus on factors such as the quality of notice, symmetry between consent and denial clickflows, ease of cancellation or revocation of consent, and the use of design features associated with compulsive usage as hallmarks of unlawful manipulative design. The symmetry factor in particular lends itself to a concrete illustration, sketched below. The next section provides an overview of contemporary regulatory treatment of manipulative design in the privacy context, and then explores emerging attempts to demarcate illegal manipulative design in draft bills.
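As a rough illustration of the symmetry factor, consider the following hypothetical sketch (in TypeScript; every name and number in it is our own invention, not a statutory test). A design team might compare the prominence of each option and the number of steps required to consent versus to refuse:

```typescript
// Hypothetical sketch of the "symmetry" factor regulators weigh in consent
// flows: consenting and declining should carry equal prominence and take an
// equal number of steps. All names here are illustrative, not drawn from any
// statute or enforcement action.

interface ConsentFlowSpec {
  notice: string;          // plain-language description of the data practice
  acceptLabel: string;     // text on the consent control
  declineLabel: string;    // text on the refusal control
  stepsToAccept: number;   // clicks required to consent
  stepsToDecline: number;  // clicks required to refuse
}

// Symmetric flow: one click either way, neutral labels.
const symmetric: ConsentFlowSpec = {
  notice: "We would like to use cookies to personalize advertising.",
  acceptLabel: "Accept",
  declineLabel: "Decline",
  stepsToAccept: 1,
  stepsToDecline: 1,
};

// Asymmetric flow: refusal is buried behind extra steps and shaming
// language, the kind of imbalance regulators have flagged.
const asymmetric: ConsentFlowSpec = {
  notice: "We use cookies to improve your experience.",
  acceptLabel: "Got it!",
  declineLabel: "No thanks, I don't want a better experience",
  stepsToAccept: 1,
  stepsToDecline: 3, // e.g., open settings, toggle each category, confirm
};

// A crude internal review heuristic: flag any flow where refusing takes
// more steps than consenting.
function flagsAsymmetry(flow: ConsentFlowSpec): boolean {
  return flow.stepsToDecline > flow.stepsToAccept;
}

console.log(flagsAsymmetry(symmetric));  // false
console.log(flagsAsymmetry(asymmetric)); // true
```

No regulator has endorsed a step-counting test of this kind, but symmetry is the sort of property a design team can audit concretely during review.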
Prohibitions on Manipulative Design in Privacy Laws
Manipulative design has important implications for consumer privacy. User consent obtained through manipulation is neither “informed” nor “freely given,” and so undermines the basis of any notice-and-consent regime for data collection, use, and sharing. Platforms that employ manipulative design techniques also have an incentive to increase their data collection, since that data can reveal what makes users particularly susceptible to specific sales pitches or invitations to opt in. And deceptive design can be deployed to extract data from users in ways that undermine their privacy interests. In light of these and other concerns, policymakers have in recent years begun to include prohibitions against the use of manipulative design in privacy legislation. Most often, these prohibitions take a narrow approach to manipulative design, focusing on the context of consent. The following charts provide an overview of the treatment of manipulative design in federal draft bills and state privacy laws.
Federal Privacy Bills
Chart 1: Federal Privacy Bills
State Privacy Laws
Chart 2: State Privacy Laws
The vast majority of legislation that seeks to restrict or prohibit the use of manipulative design online in the U.S. is directly rooted in the Deceptive Experiences To Online Users Reduction (DETOUR) Act, first introduced in Congress in 2019 by Senators Warner (D-VA) and Fischer (R-NE), but never passed into law. The DETOUR Act would forbid websites, platforms, and services from acting “to design, modify, or manipulate a user interface with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data.” While the DETOUR Act did not use the term “dark patterns,” variations and expansions of the bill’s language are replicated in more recent laws and bills that explicitly prohibit the use of “dark patterns” or define the elements of valid consent to exclude “dark patterns.”
The following sections explore: 1) the influence of the DETOUR Act’s language, as well as the emergence of other approaches to restricting manipulative design practices in the U.S.; 2) the emerging trend of lawmakers seeking to restrict the use of manipulative design beyond the consent context, such as designs that encourage users to share data or to make certain choices; and 3) laws and bills that seek to restrict particular forms of manipulative design, such as designs that encourage addiction or compulsive usage of social media, content streaming, and gaming platforms.
1. The Impact of the DETOUR Act’s Language (and Its Variations)
The above charts demonstrate a common trend in comprehensive privacy proposals: recycling the DETOUR Act’s language to prohibit the use of manipulative design in the context of consent. A number of state laws include a narrowed version of the DETOUR Act’s phrasing, leaving out the “or user data” clause, including the California Consumer Privacy Act (CCPA), the Colorado Privacy Act (CPA), and the Connecticut Data Privacy Act (CDPA) (see Chart 2). The leading comprehensive privacy bill in the 2022 Congress, the American Data Privacy and Protection Act (ADPPA), includes DETOUR Act-like language to define valid consent (see Chart 1). The DETOUR Act’s language is becoming an American export as well: Recital 67 of the recently passed EU Digital Services Act (DSA) states that “providers of online platforms” should be prohibited from “deceiving or nudging recipients of the service and from distorting or impairing the autonomy, decision-making, or choice of the recipients of the service via the structure, design or functionalities of an online interface or a part thereof.”
The proliferation of the narrowed version of the DETOUR Act’s language in laws seeking to restrict different types of designs and conduct raises two major considerations for policymakers: 1) efforts to regulate manipulative design may be duplicative of existing laws, including laws that prohibit unfair and deceptive practices; and 2) applying the DETOUR Act’s language solely in the context of consent to data processing may fail to address some key harms that arise from manipulative design techniques.
First, many manipulative design categories and practices may already be prohibited under U.S. law. Specifically, as commentators have noted, numerous forms of manipulative design are likely illegal under federal and state UDAP laws. Examples include designs that make it impossible to unsubscribe from a service or that surreptitiously add items to a user’s online shopping cart before checkout. Recent FTC enforcement actions against companies that have trapped consumers in payment plans without the ability to easily unsubscribe or coerced them into sharing their sensitive data demonstrate that regulators have the legal authority to enforce against the use of manipulative design. Lawmakers should evaluate the extent to which regulators are empowered to take action against manipulative design practices under existing laws, and use their findings to inform the creation of any new regulatory regimes.
Second, policymakers should examine what the DETOUR Act’s language adds, as a practical matter, when applied solely in the context of consent in comprehensive privacy regulation. While the major U.S. privacy regimes require user consent in different contexts, they typically establish the same standard for valid consent: “freely given, specific, informed, and unambiguous.” This is a high standard derived from European law, and it is not clear what an explicit statutory prohibition against manipulative design in obtaining consent meaningfully adds to these requirements. For example, the use of design elements that induce false beliefs or hide or delay disclosure of material information would seem to render “informed” and “unambiguous” consent impossible. At the same time, statutory language that exclusively targets consent and privacy does not squarely address harmful examples of manipulative design that have been identified outside those contexts, such as “obstruction” (“[m]aking it easy for the user to sign up for a service but hard to cancel it”) and deceptive “countdown timers” (falsely “[i]ndicating to users that a deal or discount will expire using a counting-down timer”). As they consider new rules, lawmakers should carefully consider the activity they seek to target with prohibitions on the use of manipulative design, as well as the most useful format and context for such requirements.
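To make the “countdown timer” example concrete, the following hypothetical sketch (again ours, not drawn from any case) contrasts an honest timer, which counts down to a real deadline, with a deceptive one, which simply restarts for every visitor:

```typescript
// Hypothetical illustration of why "countdown timers" can be deceptive.
// The honest version reflects a real, fixed deadline; the deceptive version
// shows every visitor a fresh ten-minute clock, falsely signaling scarcity.

// Honest: seconds remaining until a genuine, fixed deal deadline.
function honestSecondsRemaining(dealEndsAt: Date, now: Date = new Date()): number {
  return Math.max(0, Math.floor((dealEndsAt.getTime() - now.getTime()) / 1000));
}

// Deceptive: a ten-minute window that restarts on every page load,
// regardless of whether any deadline actually exists.
function deceptiveSecondsRemaining(pageLoadedAt: Date, now: Date = new Date()): number {
  const fakeWindowSeconds = 600;
  const elapsed = Math.floor((now.getTime() - pageLoadedAt.getTime()) / 1000);
  return Math.max(0, fakeWindowSeconds - elapsed);
}
```

The two functions are nearly identical mechanically; what makes the second deceptive is the false claim of scarcity it communicates, which is why the legal analysis tends to turn on the representation a design makes rather than on its implementation.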
2. Legislation that Addresses Manipulative Design Beyond Consent
While the DETOUR Act’s language regarding manipulative design applies only to consent and data collection – and much of the legislation that adapts the DETOUR Act’s language narrows its scope even further, to the consent context alone – lawmakers have recently crafted manipulative design provisions that apply to other types of interfaces and conduct. For example, the California Age-Appropriate Design Code (CA AADC) prohibits services targeted at young people from using “dark patterns” to steer youth into sharing personal data or to act in a way that a “business knows, or has reason to know, is materially detrimental to” a child’s physical or mental health or “well-being.” Depending on how the CA AADC’s “materially detrimental” standard is interpreted, the law’s language could be construed as applying to design features far beyond consent, such as algorithmically selected content, music, and video feeds and other core features of child-directed services.
Child and privacy advocates view robust regulation of manipulative design as a critical component of laws, such as the CA AADC, that aim to protect the rights of marginalized, multi-marginalized, and vulnerable individuals, who may be less able to identify and protect themselves against deceptive features. For example, children, teenagers, the elderly, and non-native English speakers may be particularly susceptible to certain deceptive tactics, such as “trick questions” (“[u]sing confusing language to steer users into making certain choices”). These heightened risks suggest that stronger protections may be necessary for vulnerable individuals or in connection with services targeted at such users. Others have suggested protections for mobile-first users, preventing bad actors from deceiving individuals who access the internet on a mobile device by placing crucial information at the end of long disclosures or agreements.
If broad bans on manipulative design become the norm, however, policymakers should be careful to define the scope of those prohibitions with precision, and should be wary of creating compliance burdens for businesses that lack corresponding benefits for consumers. Policymakers considering new restrictions on manipulative design outside the narrow consent context must examine whether DETOUR Act-style prohibitions could be overinclusive, extending to de minimis conduct or even beneficial practices that should not be restricted as a matter of public policy. As noted above, all online interfaces constrain user autonomy to some extent in that they provide a restricted set of choices within a digital environment. While the DETOUR Act’s “substantial effect” standard provides an important limiting principle, the development of more specific language could help ensure that regulators enforce against truly harmful design without relying on overinclusive definitions.
3. Approaches that Focus on Particular Sectors or Practices
Another approach to regulating manipulative design has been to write bills targeting specific types of design viewed as particularly pernicious, such as those that encourage compulsive usage. Two recent federal bills take this approach: the Social Media Addiction Reduction Technology Act (“SMART Act”), which would outlaw auto-refreshing content, music, and video feeds without “natural stopping points,” and S.1629, which would regulate the use of loot boxes, which can encourage addictive, gambling-like behavior in gaming. The New York Child Data Privacy and Protection Act (S.B. 9563) would grant the New York attorney general the authority to “ban auto-play, push notifications, prompts, in-app purchases, or any other feature in an online product targeted towards child users that it deems to be designed to inappropriately amplify the level of engagement a child user has with such product.” Advocates recently petitioned the FTC to make rules “prohibiting the use of certain types of engagement-optimizing design practices” in digital services used by young people, including practices like “low-friction variable rewards,” “design features that make it difficult for minors to freely navigate or cease use of a website or service,” and “social manipulation design features.”
Unlike the CA AADC, which addresses deceptive design broadly, these proposals focus on manipulative design patterns that lead users to engage in compulsive usage of particular platforms and services. The tighter focus of these proposals may provide more clarity than bills that seek to regulate manipulative design more widely. However, bills of this nature have largely not been enacted, perhaps because the design features they propose to regulate are welcomed by some users and disliked by others. For example, many consumers seem to enjoy, and seek out services that offer, auto-play and infinitely refreshing music or content feeds. Others object to these same design elements, arguing that they induce or exacerbate compulsive behavior. This debate may resolve as researchers continue to explore the addictive nature of certain online platforms and activities, as well as the potentially dire consequences of social media, gaming, and online gambling addictions. Risks and regulatory strategies may also differ for adults as compared to children and adolescents.
While narrower than other manipulative design limitations, the range of engagement-fostering designs targeted by bills like these is still quite broad. Moving forward, policymakers must reckon with the difficulty of drawing clear lines between a platform that is trying to appeal to consumers and offer desired content recommendations and one that is attempting to encourage compulsive usage. The drafters of such bills should aim to be as clear as possible about the exact conduct they seek to prohibit, such as designs that trick children into unintentionally disclosing sensitive data or that have been reliably shown to cause financial or emotional harm. Bills targeting specific forms of manipulative design will likely be most effective when they specifically forbid demonstrably harmful designs and provide clarity about the exact scope of prohibited conduct.
Conclusion
Increased U.S. legislative and enforcement activity regarding manipulative design is an emerging trend that is likely to continue, both inside and outside the context of privacy legislation and enforcement actions. However, there remains uncertainty regarding which forms of design are illegal under existing authority and the appropriate scope of enforcement. Thus far, the dominant trend has been for lawmakers to ban the use of manipulative design in obtaining consent for data collection and processing. At the same time, several newer privacy laws and proposals address manipulative design both more broadly and more specifically, in ways that could have significant impacts on a range of digital services.
Regulated entities and user experience (UX) designers looking to the future can and should use long-standing legal and ethical concepts around fairness, fraud, and deception as touchstones to guide their design decisions. Still, the ongoing developments and uncertainty in the regulatory framework surrounding manipulative design mean that any entity operating a consumer-facing interactive interface must continue to pay close attention to legal developments, as well as to conversations among consumer advocates and the design community about forms of UX design that appear to harm individuals.