Manipulative and Deceptive Design: New Challenges in Immersive Environments
With help from Selin Fidan, Beth Do, Daniel Berrick, and Angela Guo
Immersive technologies like spatial computing, gaming, and extended reality (XR) offer exciting ways to experience and engage with the world. However, interfaces for immersive technologies that further blur the lines between the physical and the virtual may also open the door to new, potentially more effective types of manipulative and deceptive design. Although scholars, regulators, and lawmakers have begun addressing so-called “dark patterns” in traditional online spaces, it is critical to understand how these design practices may manifest differently across mediums, particularly in novel interfaces. Identifying which design choices may constitute manipulative and deceptive design in immersive environments is a key first step toward ensuring that new products and services are designed and built in a way that protects against harmful effects on privacy, security, safety, and competition.
“Dark Patterns”: A Primer
Design choices can significantly impact how information is presented. Organizations often use lawful, persuasive design to make their products or services appealing or to inform individuals about their features. Manipulative and deceptive design, by contrast, refers to the practice of designing a service or application in a way that leads users toward decisions or behavior they would not otherwise have chosen, often in a way that does not serve their best interests. For example, using intentionally confusing wording or emotionally charged language to trick users into sharing more information would likely be considered manipulative or deceptive.
The line distinguishing merely persuasive design practices from “manipulative” or “deceptive” ones can be ambiguous. To some extent, all interfaces, whether online or in the physical world, steer user behavior or constrain users’ choices, and determining when that steering becomes unacceptable is a matter of open debate. Although scholars, practitioners, and regulators have developed taxonomies for defining and classifying so-called “dark patterns,” crafting appropriately scoped legislation and regulations that prevent harmful practices without restricting reasonable design practices is challenging.
In the context of data protection, regulations related to manipulative and deceptive design often focus on consent flows for data collection and use. Because manipulative and deceptive design practices may facilitate consent that is not truly “informed” or “freely-given,” regulators have indicated that these practices threaten the notice and consent regime underpinning most U.S. privacy law. Not only do such practices undermine individual autonomy and potentially cause direct harm, they may also create market distortions that hurt competition and limit user options. Beyond practices that obscure or subvert privacy choices, the Federal Trade Commission (FTC)—which has led enforcement against “dark patterns” as part of its FTC Act Section 5 authority—has also specifically drawn attention to design elements that induce false beliefs, hide or delay important information, lead to unauthorized charges, and make it difficult to cancel subscriptions.
New Manifestations of Manipulative and Deceptive Design in Immersive Environments
Many of the manipulative and deceptive design practices that exist in traditional web and mobile environments can also be found in immersive environments, and organizations operating in this space should be careful to avoid them. XR and virtual world applications, for example, are just as prone to practices such as visual interference and nagging as traditional online spaces. However, immersive technologies’ unique qualities and characteristics may also open the door to new, potentially more effective forms of manipulation and deception.
Some aspects of immersive technologies that may lend themselves to new or stronger manipulative design include heightened realism and blending of virtual and physical elements. These characteristics could make it easier to subtly alter a person’s perception of reality or convince them to engage in certain behavior, particularly when combined with advanced forms of AI that closely mirror human behavior or genuine experiences. Additionally, immersive technologies’ collection and aggregation of large amounts of personal data, including novel data types like eye gaze, creates further privacy risks. Often, this will involve data types and uses with which users are unfamiliar, putting them at an information disadvantage when making decisions about how to engage with applications. Finally, immersive technologies often provide individuals with novel interfaces and modes of interaction, as well as increasingly realistic AI-generated content, making immersive environments particularly conducive to manipulative or deceptive design patterns. While immersive interfaces may, if done correctly, help improve user education and facilitate more informed consent, they could also be exploited to trick users.
Examples of potential manipulative design in immersive environments: blocking important disclosure information with design elements. Source: Wang, Lee, Hernandez, & Hui
What makes immersive technologies so powerful in healthcare, education, and entertainment contexts may also make them more prone to manipulative use. The immersive qualities described above, combined with the ability to create multi-modal experiences spanning visuals, audio, text, and even haptics, present more opportunities for enhanced persuasion, as well as more mechanisms through which a motivated actor could obscure, hide, or misrepresent information to a user. Neurotechnology devices that can access an individual’s brain activity, for example, may allow bad actors not only to analyze a person’s mental state but potentially to alter it as well.
To avoid unintentionally deploying a product or service with a deceptive design element, organizations should design disclosures in a way that harnesses immersive technologies’ strengths and provides effective user education about new data types and uses. Organizations should invest in ensuring that regulators and the general public are able to develop a practical understanding of how sophisticated manipulative and deceptive design techniques can emerge in immersive spaces, given novel technological capabilities and data sources. While some researchers have begun studying manipulation in immersive technologies, more research will also be needed to develop both theoretical and empirical accounts of the mechanisms by which users are manipulated or deceived. Table 1 below illustrates what such practices could include.
Table 1: Potential manipulative and deceptive design in immersive technologies
| Manipulative or deceptive design practices | Examples |
| --- | --- |
| Driving users toward certain behavior, or blocking them from certain behavior, using design patterns, lighting, sound, or haptics, in a way that is not in the user’s best interest. | Directing users’ attention away from an important notice by causing controllers to rumble in certain ways at certain times. |
| Using lighting, interface design, or data about where a user is looking to hide or obscure relevant information, or to make certain desired behaviors more likely. | Using eye gaze data to determine where a user is looking and placing a privacy disclosure out of their view; or using lighting in the physical world to block part of a virtual disclosure box, preventing a user from opting out of data collection or use. |
| Using immersive technology’s heightened realism and immersion to play on certain emotions or associations in order to persuade a user to do or not do something. | Having avatars of a user’s loved ones deliver messages or endorsements. |
| Digitally presenting a product or service in a misleading way, deceiving the user into making a purchase they might not have made had they seen a more accurate representation. | A virtual try-on application depicting a product inaccurately (false advertising). |
| Using personal data to make inferences about a user’s mental state in order to prompt an action when the user is most vulnerable. | Inferring when a user is upset, based on personal data, and targeting them with particular ads or asking them to divulge more data. |
| Altering or editing elements of the physical world with digital content in order to change a user’s perception in a harmful way. | Superimposing a brand, logo, or message onto a person, physical object, or location without consent. |
| Pushing users toward physical locations that might be in the designer’s best interest but not necessarily the user’s. | Using eye gaze data or haptics to direct a user toward a location for advertising purposes. |
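To make the gaze-based row in Table 1 concrete: the practice turns on a simple geometric check, namely whether a disclosure ever falls within a user’s view for long enough to be read. The short TypeScript sketch below illustrates the protective counterpart, recording consent only after a disclosure panel has received a minimum amount of gaze dwell time. This is a minimal sketch with hypothetical types and function names (GazeSample, DisclosurePanel, and disclosureWasViewed are invented for illustration); real XR runtimes expose eye-tracking data through their own APIs.

```typescript
// Illustrative sketch only: hypothetical types standing in for an XR runtime's
// gaze and layout APIs. Real engines expose equivalents, but names differ.

interface Vec3 { x: number; y: number; z: number; }

interface GazeSample {
  timestampMs: number;
  origin: Vec3;      // eye position in world space
  direction: Vec3;   // gaze ray in world space
}

interface DisclosurePanel {
  center: Vec3;             // world-space center of the disclosure UI
  angularRadiusDeg: number; // approximate angular size of the panel
}

function dot(a: Vec3, b: Vec3): number {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

function normalize(v: Vec3): Vec3 {
  const len = Math.sqrt(dot(v, v)) || 1;
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// Angle (degrees) between the gaze ray and the direction from the eye to the panel.
function gazeAngleToPanel(sample: GazeSample, panel: DisclosurePanel): number {
  const toPanel = normalize({
    x: panel.center.x - sample.origin.x,
    y: panel.center.y - sample.origin.y,
    z: panel.center.z - sample.origin.z,
  });
  const cos = Math.min(1, Math.max(-1, dot(normalize(sample.direction), toPanel)));
  return (Math.acos(cos) * 180) / Math.PI;
}

// Returns true only if the user's gaze dwelled on the disclosure for at least
// `minDwellMs` before a consent choice is recorded.
function disclosureWasViewed(
  samples: GazeSample[],
  panel: DisclosurePanel,
  minDwellMs = 1500,
): boolean {
  let dwellMs = 0;
  for (let i = 1; i < samples.length; i++) {
    const onPanel = gazeAngleToPanel(samples[i], panel) <= panel.angularRadiusDeg;
    if (onPanel) dwellMs += samples[i].timestampMs - samples[i - 1].timestampMs;
  }
  return dwellMs >= minDwellMs;
}
```

Inverting this logic, for instance by placing the panel where the dwell check could never pass, is exactly the manipulative pattern described above.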
Generative AI Increases Risks for Manipulative and Deceptive Design in Immersive Tech
Immersive, data-rich environments may also be fertile ground for AI-driven agents that create highly targeted influence campaigns tailored to each person based on large amounts of their personal data and responsive to their behavior in real time. A study by XR pioneer Jeremy Bailenson demonstrated that when a political candidate’s face was subtly edited to look more like a study participant, that participant was more likely to vote for the candidate. A motivated actor armed with intimate user data and powerful AI tools could exploit these human tendencies in order to sway elections, undermine consumer autonomy, or sow disinformation. The combination of these two powerful technologies—AI that can learn about people in real time, and immersive technologies that convince the body that a virtual experience is physical—could supercharge the effectiveness of manipulative and deceptive design practices.
Regulating Manipulative and Deceptive Design in Immersive Environments
Although there is no federal law specifically targeting manipulative or deceptive design, the FTC has authority under Section 5 of the FTC Act to protect people from “unfair” and “deceptive” acts and practices. It has used this authority to go after alleged “dark patterns” that cause or are likely to cause substantial harm that consumers cannot reasonably avoid and that is not outweighed by countervailing benefits (see Table 2 below for examples). The FTC looks for particular patterns when determining whether a given practice is manipulative or deceptive. It also enforces a number of general consumer protection laws and regulations that may reach manipulative or deceptive design, such as the Restore Online Shoppers’ Confidence Act (ROSCA), the Controlling the Assault of Non-Solicited Pornography and Marketing Act (CAN-SPAM), the Telemarketing Sales Rule (TSR), the Truth in Lending Act (TILA), the Children’s Online Privacy Protection Act (COPPA), and the Equal Credit Opportunity Act (ECOA).
Table 2: FTC manipulative or deceptive design cases and enforcement actions
| Case | Description of FTC’s Allegations | Order |
| --- | --- | --- |
| FTC v. Vonage (2022) | Allegedly created a “panoply of hurdles” for consumers seeking to cancel recurring service plans, charged them a previously undisclosed “early termination fee,” and frequently continued charging subscription fees after cancellation. | Vonage must provide consumers with a cancellation option that is easy to find and use. |
| In re Tapjoy (2021) | According to the FTC, Tapjoy used both explicit false promises and hard-to-navigate interfaces to deceptively induce consumers to part with money or personal data, wasting their time and causing frustration. | Tapjoy must not make misrepresentations about consumer rewards. |
| FTC v. Age of Learning (2020) | Allegedly touted easy cancellation in promotional material, made it difficult to cancel subscriptions by providing circular forms, and auto-renewed subscriptions at the most expensive level without consumer notice or consent. | Age of Learning must not misrepresent the ease of cancellation or recurring charges, must obtain affirmative consent for renewals, and must provide a simple cancellation interface. |
| In the Matter of Epic Games (2023) | The FTC alleged Epic Games charged consumers for in-game purchases, including accidental purchases, without consent, and banned consumers from accessing content they paid for when they disputed these charges with their credit card companies. | Epic must not charge consumers without consent, must provide a simple mechanism to revoke consent for charges, must not deny consumers access to their accounts for disputing charges, and must pay a civil penalty. |
| FTC v. Publishers Clearing House (2023) | According to the FTC, used manipulative phrasing and website design to mislead consumers about how to enter the company’s sweepstakes drawings, making them believe a purchase was necessary to win or would increase their chances of winning. | PCH may not make misleading claims, must make clear disclosures, end surprise fees, stop deceptive emails, and destroy some consumer data, among other things. |
| FTC and State of California v. CRI Genetics (2023) | Allegedly used a complicated series of pop-ups and add-ons to push consumers to purchase additional products and services. | Proposed order would require CRI to stop its misleading claims, obtain consent, delete some consumer data, and pay a civil penalty. |
| FTC v. Bridge It (2023) | According to the FTC, made it easy to sign up for membership but used a number of strategies—including confusing navigation, a variety of screens, additional offers, and a multiple-choice survey—to make it difficult to cancel. | Proposed order would require Bridge It to make disclosures about and obtain consent for negative option programs, provide a simple cancellation mechanism, and pay a civil penalty. |
| FTC v. Floatme (2024) | The FTC alleged Floatme intentionally used design patterns to make it difficult for consumers to cancel subscriptions, and continued to offer an error-filled cancellation process even after consumer complaints. | Floatme must obtain consent for charges and provide an easy cancellation method. |
In addition to the FTC’s authority, manipulative and deceptive design is also regulated by provisions on “dark patterns” in certain state laws covering privacy, safety, and “unfair or deceptive acts or practices” (UDAP). Most state comprehensive privacy laws specifically prohibit using “dark patterns” to obtain user consent[1], generally defining these as “user interface[s] designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision making, or choice.” This language is drawn from the Deceptive Experiences To Online Users Reduction (DETOUR) Act, a federal bill introduced in 2021. Although it did not pass, the DETOUR Act laid the groundwork for “dark patterns” provisions not just in state privacy laws but also in the California Age Appropriate Design Code, the American Data Privacy and Protection Act (ADPPA), and the American Privacy Rights Act (APRA).
FPF has provided more detailed analysis of the DETOUR Act and its progeny.
In draft federal and state legislation, lawmakers are also increasingly seeking to restrict manipulative and deceptive design beyond the context of consent, such as design that encourages compulsive use of a product or service. For example, legislation has been introduced targeting particular types of manipulative or deceptive design, or particular audiences such as children, including:
- Social Media Addiction Reduction Technology (SMART) Act, a federal bill that would target addiction and “psychological exploitation” on social media.
- Vermont’s Age Appropriate Design Code, which would prohibit “low-friction variable reward” designs like endless scroll and autoplay.
- New York’s Child Data Privacy and Protection Act, which would grant the Attorney General the authority to ban features deemed to “inappropriately amplify the level of engagement” of a child user.
While “dark patterns” regulations will likely apply to certain practices in immersive technologies, questions remain about whether they adequately address the risks, or what impact they could have on product design. Practices that might benefit users in certain situations—such as using eye tracking to hide scene changes in order to create smoother, more enjoyable experiences—may, in other situations, manipulate or deceive users by hiding important information about privacy. With the blunt instrument of law, it may be difficult to single out only the practices that could cause harm without preventing innocuous or beneficial practices. It’s also not clear that “dark patterns” regulation, confined to the context of consent, provides any additional protections that aren’t already covered by UDAP or privacy laws, which have high standards for what constitutes proper consent. At the same time, the focus on consent, to the exclusion of other instances of manipulative or deceptive design, may ignore harmful design practices that don’t involve consent. These policy scoping questions will only become more germane as AI, neurotechnology, smart devices, and other emerging technologies pose new opportunities for manipulative and deceptive design.
Conclusion and Recommendations
Organizations deploying immersive technologies must recognize that heightened realism, immersion, and reliance on personal data may lead to new, potentially more powerful forms of manipulative and deceptive design, and they must take proactive steps to address those risks. In addition to instituting best practices to avoid manipulative and deceptive design, organizations should also create internal processes for monitoring and responding to complaints about such practices.
Organizations deploying immersive tools aren’t the only ones who will need to take proactive steps here. Researchers in both academia and industry should familiarize themselves with these technologies, and specifically with how immersive environments may be particularly conducive to manipulative and deceptive design practices, and should begin developing best practices for preventing them. Policymakers and regulators, including the FTC and others tasked with enforcing consumer protection law, should also stay up to date on the latest research about immersive technologies and how individuals use them, as well as on the unintended adverse impacts of any legislative or regulatory measure.
[1] California, Colorado, Connecticut, Delaware, Montana, New Jersey, Texas, and New Hampshire all prohibit the use of “dark patterns” to obtain consent. Oregon has a similar prohibition but does not use the term “dark patterns.” Florida’s “Digital Bill of Rights,” while not technically a comprehensive privacy law, uses the same language to prohibit “dark patterns” in obtaining consent, as well as for certain other practices with regard to children. A number of laws also point to the FTC’s conception and taxonomy of “dark patterns.”