FPF Responds to the OMB’s Request for Information on Responsible Artificial Intelligence Procurement in Government
On April 29, the Future of Privacy Forum submitted comments to the Office of Management and Budget (OMB) in response to the agency’s Request for Information (RFI) on responsible procurement of artificial intelligence (AI) in government, particularly the intersection of AI procurement with the broader risks posed by the development and use of AI tools and other emerging technologies. The OMB issued the RFI pursuant to the White House’s Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
“Federal agencies are responsible for ensuring that their use of AI and generative AI tools and systems aligns with legal and regulatory standards. By establishing clear guidelines for AI procurement, OMB has a material opportunity to address and mitigate potential risks to personal data that some AI tools and systems may pose, either through design or use. In addition, as one of the largest purchasers of AI tools and systems, the U.S. government’s procurement policies around AI can become potent drivers for privacy, transparency, and equitable outcomes.”
– Anne J. Flanagan, FPF Vice President for Artificial Intelligence
FPF raised the importance of contractual responsibilities, existing data protection regulations, and equitable outcomes. Given these considerations, key recommendations included:
OMB should ensure that contractual responsibilities and requirements for transparency, testing, evaluation, and impact assessments in procured AI systems are based on clear definitions and roles, taking into account the risk profile of the AI system;
OMB should ensure that agencies procure AI systems or services that meet the existing data protection standards that apply to federal agencies when they handle personal data; and
OMB should ensure that agencies procure AI systems or services that support, rather than undermine, equitable outcomes by requiring agencies to analyze the particular risks these systems may pose to people, especially marginalized individuals and communities.
Read our full comments to the OMB on responsible AI procurement in the government.
New Age-Appropriate Design Code Framework Takes Hold in Maryland
On April 6, the Maryland legislature passed HB 603/SB 571, the “Maryland Age-Appropriate Design Code Act” (Maryland AADC), which is currently awaiting action from Governor Moore. While FPF has already written about Maryland’s potentially “paradigm-shifting” state comprehensive privacy law, the Maryland AADC may similarly pioneer a new model for other states. The Maryland AADC seeks to create heightened protections for youth aged 17 and under and will apply to businesses that provide online services, products, or features reasonably likely to be accessed by children. Businesses that may not typically be in scope of other statutorily-created child privacy protections may find themselves with new obligations under this framework.
See our comparison chart for a full side-by-side comparison between the Maryland Age-Appropriate Design Code and California Age-Appropriate Design Code.
Who does the Maryland AADC apply to?
If enacted, the Maryland AADC will apply to businesses that provide online services, products, or features reasonably likely to be accessed by children. The applicability threshold is fairly similar to that of the California Consumer Privacy Act, as integrated into the California Age-Appropriate Design Code Act (California AADC). The Maryland AADC specifically captures businesses that conduct business in Maryland and meet one of three thresholds: 1) have annual gross revenue of at least $25,000,000; 2) buy, receive, sell, or share the personal data of 50,000 or more consumers, households, or devices (down from 100,000 consumers or households in California); or 3) derive at least 50% of annual revenue from the sale of personal data.
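For readers mapping these thresholds onto their own compliance tooling, the three prongs reduce to a simple boolean check. The sketch below is illustrative only: the names (BusinessProfile, conducts_business_in_md, and so on) are hypothetical rather than drawn from the bill text, and it covers only the numeric thresholds, not the separate “reasonably likely to be accessed by children” analysis.

```python
# Illustrative sketch of the Maryland AADC coverage thresholds described above.
# Field names are hypothetical and not drawn from the statute; actual scoping
# requires legal review of the bill text itself.
from dataclasses import dataclass


@dataclass
class BusinessProfile:
    conducts_business_in_md: bool
    annual_gross_revenue: float            # USD per year
    consumers_households_devices: int      # whose personal data is bought/received/sold/shared
    revenue_share_from_data_sales: float   # fraction of annual revenue, 0.0 to 1.0


def md_aadc_thresholds_met(b: BusinessProfile) -> bool:
    """True if the business conducts business in Maryland and meets at least
    one of the three thresholds summarized in the text above."""
    if not b.conducts_business_in_md:
        return False
    return (
        b.annual_gross_revenue >= 25_000_000
        or b.consumers_households_devices >= 50_000
        or b.revenue_share_from_data_sales >= 0.50
    )


# Example: a service with $5M revenue but 60,000 consumers' data meets the second prong.
print(md_aadc_thresholds_met(BusinessProfile(True, 5_000_000, 60_000, 0.10)))  # True
```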
What about that other age-appropriate design code?
The Maryland AADC is the second “design code” bill to pass a U.S. state legislature, following the California AADC of 2022. However, FPF’s analysis finds that the Maryland AADC differs from its California predecessor in numerous critical ways. While the California AADC was slated to take effect on July 1, 2024, it was enjoined in September 2023 by the United States District Court for the Northern District of California (CA District Court) in NetChoice v. Bonta. The CA District Court held that plaintiffs were likely to succeed on their claim that several non-severable provisions of the California AADC violate the First Amendment.
While litigation on the California AADC is ongoing, proponents of a design code-style framework have claimed the model can be fixed in light of the constitutionality questions raised in the District Court’s preliminary injunction. The “AADC 2.0” framework emerged during the 2024 state legislative session in several states, including Vermont, Minnesota, New Mexico, and Maryland. Maryland is the first state to pass an AADC 2.0 bill, and the Maryland AADC will, therefore, likely be the subject of considerable analysis and debate over whether the First Amendment vulnerabilities that plagued the California AADC have been removed.
Fundamental changes from California AADC
No express age estimation mandate
One of the most significant changes in the Maryland AADC is that there is no express obligation for businesses to determine the age of individuals using a service. Under the California AADC, businesses would have been required to estimate the age of young users with “a reasonable level of certainty appropriate to the risks” arising from data management practices or, alternatively, provide strict privacy protections by default to all individuals regardless of age. Under present methods, estimating the age of users with a high level of accuracy typically necessitates collecting additional personal information, such as government identifiers or facial scans. In granting a preliminary injunction against the California AADC, the CA District Court appeared greatly troubled by this age estimation requirement, noting that it was “likely to exacerbate” rather than alleviate any harm of insufficient data protections for children by requiring both children and adults to share additional personal information.
In 2023-2024, several other youth privacy laws with requirements to collect age information have similarly been enjoined by U.S. courts, often on First Amendment grounds. Given this consistent trend, it is unsurprising that the Maryland AADC would not include this requirement. Instead, the Maryland AADC solely relies on a “likely to be accessed by children” audience standard. Rather than collecting age information, a service will need to assess, using a variety of indicators, whether or not the service is likely to be used by children. Some factors appear to be modeled after the federal Children’s Online Privacy Protection Act’s (COPPA) similar “directed to children” standard, such as empirical evidence on audience composition or whether the online product features advertisements marketed to children. However, as a reminder, the Maryland AADC applies to children and teens up to 18. While businesses might have great familiarity with assessing whether advertisements appeal to children under 13 in complying with COPPA, doing this assessment for a 16 or 17-year-old might be less familiar and potentially more complicated.
Notably, the CA District Court in Bonta also observed that the age estimation provision of the California AADC was the “linchpin” of the law because knowing the age of users is critical for applying “age-appropriate” protections. For example, the Maryland AADC requires that privacy information and community standards be provided in a language suited to the age of children likely to access the service. Therefore, it remains an open question whether Maryland’s removal of this express requirement also erases any implicit obligations for collecting age information to serve the “age-appropriate” protections mandated by the bill.
Defining and upholding the “best interests of children”
While the Maryland AADC is an evolution of the California AADC, the California AADC is itself derived from the UK Age-Appropriate Design Code (UK AADC). A core component of the UK AADC is that businesses should consider the “best interests of the child” when designing and developing online services. The “best interests of the child” is a recognized concept adopted from the UN Convention on the Rights of the Child, which the United States is the only country not to have ratified. In the United States, the “best interests of the child” is typically not an established legal standard outside of the family law context.
While the California AADC imported the “best interests of the child” language from the UK AADC, it did not include a definition. Under the California AADC, businesses would have been permitted to avoid certain obligations if they were able to demonstrate that their alternative course of action was consistent with the undefined “best interests of children.”
In contrast, the Maryland AADC establishes a quasi-‘duty of care’ that affirmatively obligates online services to act in the best interests of children. It goes on to scope the “best interests of the child” as uses of a child’s data, or designs of an online product, that will not result in 1) reasonably foreseeable and material physical or financial harm to children, 2) reasonably foreseeable and severe psychological or emotional harm to children, 3) a highly offensive intrusion on the reasonable privacy expectations of children, or 4) discrimination against children based upon race, color, religion, national origin, disability, gender identity, sex, or sexual orientation. As further explained below, this switch from using “best interests of the child” as a means to avoid obligations to instead creating affirmative obligations arguably makes the Maryland AADC less flexible in ways that could, for instance, disrupt or prevent children’s access to beneficial services.
Changes to data protection impact assessment (DPIA) obligations
The Maryland AADC, like the California AADC before it, requires businesses to conduct DPIAs to consider how online products will impact children. However, the Maryland AADC incorporates small but potentially impactful changes from its Californian predecessor. The District Court in Bonta took issue with the California AADC’s DPIA requirement for two reasons: 1) it did not address the harms it aimed to cure because the DPIAs addressed risks arising from data management practices rather than the design of a service, and 2) while businesses were required to develop a plan to mitigate risks, there was no requirement to actually mitigate those risks. In light of this, the Maryland AADC requires that DPIAs include a description of steps the company has taken and will take to comply with the duty to act in the best interests of children.
The Maryland AADC also makes small changes to what harms or risks must be assessed. The California AADC required assessing whether a service could expose children to “harmful” or “potentially harmful” content, a requirement that particularly raised the ire of the news industry. Though the District Court did not reach this issue, the Maryland AADC removed any mention of content, presumably to proactively address concerns about First Amendment free speech issues. The Maryland AADC also lacks a requirement to assess harms related to targeted advertising; during the legislative session, drafters removed any mention of targeted advertising from the bill. The exclusion of “targeted advertising” may be less a response to Bonta and more likely because the Maryland Online Data Privacy Act, which also creates heightened protections for children and teens, explicitly addresses targeted advertising.
Stricter processing restrictions
One area where the Maryland AADC arguably goes further than the California AADC is in placing more expansive default limitations on how businesses may process children’s data; “process” is defined to include everything from collecting and using to storing and deleting personal information. The Maryland AADC would ban businesses from processing personal data that is not reasonably necessary to provide an online product with which the child is “actively and knowingly engaged.” While “actively and knowingly” is not defined, a strict reading would suggest that the bill forbids businesses from retaining any information about a child user beyond a single-use session, including basic details like account information and log-in credentials. This restriction would functionally deprive children of the ability to use many online products, services, and features. Even if future regulations or judicial holdings advance a more flexible interpretation of this restriction, it could significantly impact the ability of services to perform analytics, collect attribution data, or even receive health records from a parent or doctor.
Under the California AADC, there was an exemption from this prohibition if the business could demonstrate a compelling reason that the processing was in children’s best interests. However, the Maryland AADC has no similar exemption. Instead, Maryland will prohibit any processing inconsistent with children’s best interests under a separate provision, so reconciling the processing restrictions under this law may prove challenging.
No mention of enforcing published terms
Unlike the California AADC and other state laws, the Maryland AADC does not require businesses to enforce their terms of service or other policies implemented under the law. By comparison, the California AADC would have required that businesses both publish and enforce “terms, policies, and community standards established by the business,” essentially giving the California Attorney General power to second-guess core First Amendment-protected functions such as content moderation. While different in scope, Florida’s social media law, recently heard by the Supreme Court, similarly contained a requirement to enforce community standards that a District Court determined conflicted with a service’s First Amendment right to exercise editorial discretion. The absence of such a provision in the Maryland AADC may be explained by criticism of these other laws, which pointed out that creating liability for services that fail to enforce published community guidelines may unintentionally incentivize platforms to lower community standards, leading to more harmful online spaces overall.
Conclusion
After the California AADC passed, some thought a flurry of similar legislation could be passed in other states. While a handful of states considered copycat legislation over the last two legislative sessions, none have ultimately been enacted, potentially due to the ongoing legal questions about that model’s constitutionality. Now that Maryland is pioneering this new “AADC 2.0” framework, stakeholders should be on high alert for new legal challenges and the potential for other states to consider and iterate upon this approach. If enacted, the Maryland AADC will go into effect on October 1, 2024 – coincidentally the same day the Connecticut Data Privacy Act’s recently passed heightened youth protections go into effect.
Future of Privacy Forum Partners on New National Science Foundation Large-Scale Research Infrastructure for Education
SafeInsights brings together digital learning platforms, institutions, and a world-class team to enable research studies to inform teaching and learning.
May 1, 2024 ― The Future of Privacy Forum (FPF) has received a subaward on the newly announced National Science Foundation (NSF) SafeInsights project, a five-year, $90 million research and development (R&D) infrastructure grant for inclusive education research. Led by OpenStax at Rice University, SafeInsights is a large-scale education research hub that will securely connect digital learning platforms and educational institutions to study learning across different contexts efficiently. This initiative represents the NSF’s largest single investment in R&D infrastructure for education at a national scale. SafeInsights will be the first national infrastructure of its kind and will deploy new techniques to ensure that research benefits are maximized while risk is minimized.
“Through this project, we’re excited to lend the Future of Privacy Forum’s expertise to help inform how researchers access rich learning data without compromising student privacy,” said John Verdi, FPF’s Senior Vice President for Policy. “Since its founding, FPF’s work has been driven by a belief that fair and ethical use of technology can improve people’s lives while safeguarding our privacy. SafeInsights’ model and directive will be critical to advancing the next generation of education research.”
SafeInsights includes a multidisciplinary network of 80 collaborating institutions and partners, including more than a dozen pioneering digital learning platforms that together reach tens of millions of students. The Future of Privacy Forum will collaborate with researchers and large-scale, digital learning platforms to enable privacy-preserving research studies to better understand student learning.
According to national polls conducted by the Data Quality Campaign, 86% of teachers see using educational data as an integral part of effective teaching. However, the majority of teachers must individually piece together strategies to interpret and use that data, often with limited resources.
“Better research leads to better learning. SafeInsights will enable a community of researchers to safely study large, diverse groups of students over time as they use different learning platforms,” said Richard Baraniuk, Rice professor, OpenStax founder, and project lead. “Researchers will be able to explore new ways to understand learning for students at all levels of education, which can lead to unprecedented discoveries and next-level innovations.”
“SafeInsights’ values of privacy and equity are perfectly aligned with those of the Future of Privacy Forum, an organization that has spent 15 years working to advance both in the digital realm,” said Shea Swauger, FPF’s Senior Policy Analyst for Data Sharing and Ethics. “We look forward to partnering on this important work, leveraging new technologies to ensure all students succeed.”
To learn more about SafeInsights and stay informed of future progress, please visit safeinsights.org.
About Future of Privacy Forum (FPF)
FPF is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify risks, and develop appropriate protections. FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington, D.C., Brussels, Singapore, and Tel Aviv. Follow FPF on X and LinkedIn.
Manipulative and Deceptive Design: New Challenges in Immersive Environments
With help from Selin Fidan, Beth Do, Daniel Berrick, and Angela Guo
Immersive technologies like spatial computing, gaming, and extended reality (XR) offer exciting ways to experience and engage with the world. However, interfaces for immersive technologies that further blur the lines between the physical and the virtual may also open the door to new, potentially more effective types of manipulative and deceptive design. Although scholars, regulators, and lawmakers have begun addressing so-called “dark patterns” in traditional online spaces, it is critical to understand the ways these design practices may manifest differently across different mediums, particularly in novel interfaces. Being able to identify how different choices may constitute manipulative and deceptive design in immersive environments is a key first step toward ensuring that new products and services are designed and built in a way that protects against harmful effects on privacy, security, safety, and competition.
“Dark Patterns”: A Primer
Design choices can significantly impact how information is presented. Organizations often utilize lawful, persuasive design choices in order to make their products or services look appealing or inform individuals about their features. Manipulative and deceptive design, by contrast, refers to the practice of designing a service or application in a way that leads users toward decisions or behavior they may not have otherwise chosen, often in a way that does not serve their best interests. For example, using intentionally confusing wording or emotionally charged language to trick users into sharing more information would likely be considered manipulative or deceptive.
The line distinguishing merely persuasive design practices from “manipulative” or “deceptive” ones can be ambiguous. To some extent, all interfaces, whether online or in the physical world, steer user behavior or constrain users’ choices, and determining when that steering becomes unacceptable is a matter of open debate. Although scholars, practitioners, and regulators have developed taxonomies for defining and classifying so-called “dark patterns,” crafting appropriately scoped legislation and regulations that prevent harmful practices without restricting reasonable design practices is challenging.
In the context of data protection, regulations related to manipulative and deceptive design often focus on consent flows for data collection and use. Because manipulative and deceptive design practices may facilitate consent that is not truly “informed” or “freely given,” regulators have indicated that these practices threaten the notice and consent regime underpinning most U.S. privacy law. Not only do such practices undermine individual autonomy and potentially cause direct harm, but they may also create market distortions that hurt competition and limit user options. Beyond practices that obscure or subvert privacy choices, the Federal Trade Commission (FTC)—which has led enforcement against “dark patterns” as part of its FTC Act Section 5 authority—has also specifically drawn attention to design elements that induce false beliefs, hide or delay important information, lead to unauthorized charges, and make it difficult to cancel subscriptions.
New Manifestations of Manipulative and Deceptive Design in Immersive Environments
Many of the manipulative and deceptive design practices that exist in traditional web and mobile environments can also be found in immersive environments, and organizations operating in this space should be careful to avoid them. XR and virtual world applications, for example, are just as prone to practices such as visual interference and nagging as traditional online spaces. However, immersive technologies’ unique qualities and characteristics may also open the door to new, potentially more effective forms of manipulation and deception.
Some aspects of immersive technologies that may lend themselves to new or stronger manipulative design include heightened realism and blending of virtual and physical elements. These characteristics could make it easier to subtly alter a person’s perception of reality or convince them to engage in certain behavior, particularly when combined with advanced forms of AI that closely mirror human behavior or genuine experiences. Additionally, immersive technologies’ collection and aggregation of large amounts of personal data, including novel data types like eye gaze, creates further privacy risks. Often, this will involve data types and uses with which users are unfamiliar, putting them at an information disadvantage when making decisions about how to engage with applications. Finally, immersive technologies often provide individuals with novel interfaces and modes of interaction, as well as increasingly realistic AI-generated content, making immersive environments particularly conducive to manipulative or deceptive design patterns. While immersive interfaces may, if done correctly, help improve user education and facilitate more informed consent, they could also be exploited to trick users.
Examples of potential manipulative design in immersive environments: blocking important disclosure information with design elements. Source: Wang, Lee, Hernandez, & Hui
What makes immersive technologies so powerful in healthcare, education, and entertainment contexts may also make them more prone to manipulative use. The immersive elements of these technologies described above, in addition to the ability to create multi-modal experiences combining visuals, audio, text, and even haptics, present more opportunities for enhanced persuasion, as well as more mechanisms through which a motivated actor could obscure, hide, or misrepresent information to a user. Devices like neurotechnologies that can access an individual’s brain activity, for example, may allow bad actors not only to analyze that individual’s mental state but potentially to alter it as well. To avoid unintentionally deploying a product or service with a deceptive design element, organizations should design disclosures in a way that harnesses immersive technologies’ strengths and provides effective user education about new data types and uses. Organizations should also invest in ensuring that regulators and the general public can develop a practical understanding of how sophisticated manipulative and deceptive design techniques can emerge in immersive spaces, given novel technological capabilities and data sources. While some researchers have begun studying manipulation in immersive technologies, more research will be needed to develop both theoretical and empirical accounts of the mechanisms by which users are manipulated or deceived. Table 1 below illustrates what such practices could include.
Table 1: Potential manipulative and deceptive design in immersive technologies
Practice: Driving users towards certain behavior, or blocking them from certain behavior, using design patterns, lighting, sound, or haptics, in a way that is not in the user’s best interest.
Example: Directing users’ attention away from an important notice by causing controllers to rumble in certain ways at certain times.

Practice: Using lighting, interface design, or data about where a user is looking to hide or obscure relevant information, or to make certain desired behaviors more likely.
Example: Using eye gaze data to determine where a user is looking and placing a privacy disclosure out of their view; or using lighting in the physical world to block part of a virtual disclosure box, preventing a user from opting out of data collection or use.

Practice: Using immersive technology’s heightened realism and immersion to play on certain emotions or associations to persuade a user to do or not do something.
Example: Having avatars of a user’s loved ones deliver messages or endorsements.

Practice: Digitally presenting a product or service in a misleading way, deceiving the user and causing them to make a purchase they may not have made had they been presented a more accurate representation.
Example: A virtual try-on application depicting a product in an inaccurate way (false advertising).

Practice: Using personal data to make inferences about a user’s mental state for the purpose of getting the user to engage in an action when they are most vulnerable.
Example: Inferring when a user is upset, based on personal data, and targeting them with particular ads or requests to divulge more data.

Practice: Altering or editing elements of the physical world with digital content in order to change a user’s perception in a harmful way.
Example: Superimposing a brand, logo, or message onto a person, physical object, or location without consent.

Practice: Pushing users towards certain physical locations that might be in the designer’s best interest but not necessarily the user’s.
Example: Using eye gaze data or haptics to direct a user towards a location for advertising purposes.
Generative AI Increases Risks for Manipulative and Deceptive Design in Immersive Tech
Immersive, data-rich environments may also be fertile ground for AI-driven agents that create highly targeted influence campaigns tailored to each person based on large amounts of their personal data, and responsive to their behavior in real time. A study by XR pioneer Jeremy Bailenson demonstrated that when political candidates’ faces were subtly edited to look more like a study participant, that participant was more likely to vote for the candidate. A motivated actor armed with intimate user data and powerful AI tools could exploit these human tendencies in order to sway elections, undermine consumer autonomy, or sow disinformation. The combination of these two powerful technologies—AI that can learn about people in real time, and immersive technologies that convince the body that a virtual experience is actually physical—could supercharge the effectiveness of manipulative and deceptive design practices.
Regulating Manipulative and Deceptive Design in Immersive Environments
Although there is no federal law against manipulative or deceptive design, the FTC has authority under Section 5 of the FTC Act to protect people from “unfair” and “deceptive” acts and practices. It has used this authority to go after alleged “dark patterns” that cause or are likely to cause unavoidable harm that isn’t outweighed by other benefits (see Table 2 below for examples). The FTC looks for particular patterns when determining whether a given practice is manipulative or deceptive. It also enforces a number of general consumer protection laws and regulations that may regulate manipulative or deceptive design, such as the Restore Online Shoppers’ Confidence Act (ROSCA), Controlling the Assault of Non-Solicited Pornography and Marketing Act (CAN-SPAM), Telemarketing Sales Rule (TSR), Truth in Lending Act (TILA), Children’s Online Privacy Protection Act (COPPA), and Equal Credit Opportunity Act (ECOA).
Table 2: FTC manipulative or deceptive design cases and enforcement actions
Vonage — Allegation: Created a “panoply of hurdles” for consumers to cancel recurring service plans, charged these consumers a previously undisclosed “early termination fee,” and frequently continued charging subscription fees after cancellation.
Outcome: Vonage must provide consumers with a cancellation option that is easy to find and use.

Tapjoy — Allegation: According to the FTC, used both explicit false promises and hard-to-navigate interfaces to deceptively induce consumers to part with money or personal data, waste their time, and cause frustration.
Outcome: Tapjoy must not make misrepresentations about consumer rewards.

Age of Learning — Allegation: Touted easy cancellation in promotional material, made it difficult to cancel subscriptions by providing circular forms, and auto-renewed subscriptions at the most expensive level without consumer notice or consent.
Outcome: Age of Learning must not misrepresent the ease of cancellation or recurring charges, must obtain affirmative consent for renewals, and must provide a simple cancellation interface.

Epic Games — Allegation: The FTC alleged Epic Games charged consumers for in-game purchases, including accidentally-made purchases, without consent, and banned consumers from accessing content they paid for when they disputed these charges with their credit card companies.
Outcome: Epic must not charge consumers without receiving consent, must provide a simple mechanism to revoke consent for charges, must not deny consumers access to their accounts for disputing charges, and must pay a civil penalty.

Publishers Clearing House (PCH) — Allegation: According to the FTC, used manipulative phrasing and website design to mislead consumers about how to enter the company’s sweepstakes drawings, making them believe a purchase was necessary to win or would increase their chances of winning.
Outcome: PCH may not make misleading claims, must make clear disclosures, end surprise fees, stop deceptive emails, and destroy some consumer data, among other things.

Bridge It — Allegation: According to the FTC, made it easy to sign up for membership but used a number of strategies—including confusing navigation, a variety of screens, additional offers, and a multiple-choice survey—to make it difficult to cancel.
Outcome: A proposed order would require Bridge It to make disclosures about and obtain consent for negative option programs, provide a simple mechanism to cancel, and pay a civil penalty.

FloatMe — Allegation: The FTC alleged FloatMe intentionally used design patterns to make it difficult for consumers to cancel subscriptions, and continued to offer an error-filled cancellation process even after consumer complaints.
Outcome: FloatMe must obtain consent for charges and provide an easy cancellation method.
In addition to the FTC’s authority, manipulative and deceptive design is also regulated by provisions on “dark patterns” in certain state laws covering privacy, safety, and “unfair or deceptive acts or practices” (UDAP). Most state comprehensive privacy laws specifically prohibit using “dark patterns” to obtain user consent,[1] generally defining these as “user interface[s] designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision making, or choice.” This language is drawn from the Deceptive Experiences To Online Users Reduction (DETOUR) Act, a federal bill introduced in 2021. Although it did not pass, the DETOUR Act laid the groundwork for “dark patterns” provisions not just in state privacy laws but also in the California Age Appropriate Design Code, the American Data Privacy and Protection Act (ADPPA), and the American Privacy Rights Act (APRA).
FPF has provided more detailed analysis of the DETOUR Act and its progeny here.
In draft federal and state legislation, lawmakers are also increasingly seeking to restrict manipulative and deceptive design beyond the context of consent, such as design that encourages compulsive use of a product or service. For example, legislation has been introduced to target particular types of manipulative or deceptive design, or particular audiences like children, such as:
Social Media Addiction Reduction Technology (SMART) Act, a federal bill that would target addiction and “psychological exploitation” on social media.
Vermont’s Age Appropriate Design Code, which would prohibit “low-friction variable reward” designs like endless scroll and autoplay.
New York’s Child Data Privacy and Protection Act, which would grant the Attorney General the authority to ban features deemed to “inappropriately amplify the level of engagement” of a child user.
While “dark patterns” regulations will likely apply to certain practices in immersive technologies, questions remain about whether they adequately address the risks, or what impact they could have on product design. Practices that might benefit users in certain situations—such as using eye tracking to hide scene changes in order to create smoother, more enjoyable experiences—may, in other situations, manipulate or deceive users by hiding important information about privacy. With the blunt instrument of law, it may be difficult to single out only the practices that could cause harm without preventing innocuous or beneficial practices. It’s also not clear that “dark patterns” regulation, confined to the context of consent, provides any additional protections that aren’t already covered by UDAP or privacy laws, which have high standards for what constitutes proper consent. At the same time, the focus on consent, to the exclusion of other instances of manipulative or deceptive design, may ignore harmful design practices that don’t involve consent. These policy scoping questions will only become more germane as AI, neurotechnology, smart devices, and other emerging technologies pose new opportunities for manipulative and deceptive design.
Conclusion and Recommendations
Organizations deploying immersive technologies must recognize that the heightened realism, immersivity, and reliance on personal data may lead to new, potentially more powerful forms of manipulative and deceptive design, and take steps to proactively address their risks. In addition to instituting best practices to avoid manipulative and deceptive design, organizations should also create internal processes for monitoring and responding to complaints of such practices.
Organizations deploying immersive tools aren’t the only ones who will need to take proactive steps here. Researchers in both academia and industry should familiarize themselves with these technologies, and specifically with how immersive environments may be particularly conducive to manipulative and deceptive design practices, and begin developing best practices for preventing them. Policymakers and regulators, such as the FTC and those tasked with enforcing consumer protection law, should also stay up to date on the latest research about immersive technologies and the ways individuals use them, as well as on the unintended adverse impacts of any legislative or regulatory measure.
[1] California, Colorado, Connecticut, Delaware, Montana, New Jersey, Texas, and New Hampshire all prohibit the use of “dark patterns” to obtain consent. Oregon has a similar prohibition but does not use the term “dark patterns.” Florida’s “Digital Bill of Rights,” while not technically a comprehensive privacy law, uses the same language to prohibit “dark patterns” in obtaining consent, as well as for other practices in regards to children. A number of laws also point to the FTC’s conception and taxonomy of “dark patterns.”
Setting the Stage: Connecticut Senate Bill 2 Lays the Groundwork for Responsible AI in the States
NEW: Read Tatiana Rice’s op-ed in the CT Mirror on SB2
Last night, on April 24, the Connecticut Senate passed SB 2, marking a significant step toward comprehensive AI regulation in the United States. This comprehensive, risk-based approach has emerged as a leading state legislative framework for AI regulation. If enacted, SB 2 would stand as the first piece of legislation in the United States governing private-sector development and deployment of AI at a scale comparable to the EU AI Act. The law would become effective February 1, 2026.
FPF has released a new Two-Pager Fact Sheet that summarizes core components of CT SB 2 pertaining to private-sector regulation.
“Connecticut Senate Bill 2 is a groundbreaking step towards comprehensive AI regulation that is already emerging as a foundational framework for AI governance across the United States. The legislation aims to strike an important balance of protecting individuals from harms arising from AI use, including creating necessary safeguards against algorithmic discrimination, while promoting a risk-based approach that encourages the valuable and ethical uses of AI. We look forward to continuing to work with Sen. Maroney and other policymakers in the future to build upon and refine this framework, ensuring it reflects best practices and is responsive to the dynamic AI landscape.”
–Tatiana Rice, Deputy Director for U.S. Legislation
At a high level, here’s our summary of the bill’s most significant private-sector provisions:
Scope: The bill’s private-sector provisions primarily regulate developers and deployers of high-risk AI systems, i.e., those used to make, or that are a substantial factor in making, consequential decisions regarding education, employment, financial or lending services, healthcare, or other important life opportunities. There are small-business exceptions for deployers in certain circumstances. The bill also requires any person or entity deploying an artificial intelligence system that interacts with individuals to disclose to the person that they are engaging with an AI system and to watermark AI-generated content.
Developer and Deployer Obligations: Both developers and deployers of high-risk AI systems would be subject to a duty of reasonable care to avoid algorithmic discrimination and must issue a public statement regarding the use or sale of high-risk AI systems. Developers would also need to provide certain disclosures and documentation to deployers, including information regarding intended use, the data used to train the system, and risk mitigation measures. Deployers would be required to maintain a risk management policy, conduct impact assessments on high-risk AI systems, and ensure consumers are provided their relevant rights.
Individual Rights: Individuals must be provided notice before a high-risk AI system is used to make, or be a substantial factor in making, a consequential decision. If an adverse consequential decision is made, individuals have a right to an explanation of how the high-risk AI system came to its conclusion, including the personal data used to render the decision, the right to correct the personal data used to render the decision, and the right to appeal the decision for human review. If a deployer is also a controller under the Connecticut Data Privacy Act (CTDPA), they also must inform individuals of their rights under the CTDPA, including the right to opt-out of profiling in furtherance of solely automated decisions.
Enforcement: The Attorney General would have the sole authority to enforce provisions of the bill, though the bill explicitly does not supersede existing authority of other state agencies to enforce against discrimination, including the Connecticut Commission on Human Rights and Opportunities (CHRO). However, the Attorney General may not bring an action for claims otherwise being brought by the CHRO for the same conduct. Developers and deployers would have a 60-day right to cure any alleged violations until June 30, 2026.
Compliance and Reciprocity: Once the bill is enacted, entities would have almost two years to come into compliance with the Act. If an entity is otherwise in compliance with the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework or another nationally or internationally recognized risk management framework, it may assert that compliance as an affirmative defense.
Beyond the bill’s private-sector regulations, SB 2 also creates a new task force to create recommendations regarding the regulation of generative and general-purpose AI, and contains provisions regarding AI-generated non-consensual intimate images, deepfakes in political communications, workforce development, and public-private partnerships, amongst other topics.
FPF will continue to track the bill’s developments in the coming weeks. Follow FPF on Twitter/X for the latest updates.
FPF Develops Checklist & Guide to Help Schools Vet AI Tools for Legal Compliance
FPF’s Youth and Education team has developed a checklist and accompanying policy brief to help schools vet generative AI tools for compliance with student privacy laws. Vetting Generative AI Tools for Use in Schools is a crucial resource as the use of generative AI tools continues to increase in educational settings. It’s critical for school leaders to understand how existing federal and state student privacy laws, such as the Family Educational Rights and Privacy Act (FERPA), apply to the complexities of machine learning systems in order to protect student privacy. With these resources, FPF aims to provide much-needed clarity and guidance to educational institutions grappling with these issues.
“AI technology holds immense promise in enhancing educational experiences for students, but it must be implemented responsibly and ethically,” said David Sallay, the Director for Youth & Education Privacy at the Future of Privacy Forum. “With our new checklist, we aim to empower educators and administrators with the knowledge and tools necessary to make informed decisions when selecting generative AI tools for classroom use while safeguarding student privacy.”
The checklist, designed specifically for K-12 schools, outlines key considerations for incorporating generative AI into a school or district’s edtech vetting process.
These include:
assessing the requirements for vetting all edtech;
describing the specific use cases;
preparing to address transparency and explainability; and
determining if student PII will be used to train the large language model (LLM).
By prioritizing these steps, educational institutions can promote transparency and protect student privacy while maximizing the benefits of technology-driven learning experiences for students.
The in-depth policy brief outlines the relevant laws and policies a school should consider, the unique compliance considerations of generative AI tools (including data collection, transparency and explainability, product improvement, and high-risk decision-making), and their most likely use cases (student, teacher, and institution-focused).
The brief also encourages schools and districts to update their existing edtech vetting policies to address the unique considerations of AI technologies (or to create a comprehensive policy if one does not already exist) instead of creating a separate vetting process for AI. It also highlights the role that state legislatures can play in ensuring the efficiency of school edtech vetting and oversight and calls on vendors to be proactively transparent with schools about their use of AI.
Check out the LinkedIn Live with CEO Jules Polonetsky and Youth & Education Director David Sallay about the Checklist and Policy Brief.
To read more of the Future of Privacy Forum’s youth and student privacy resources, visit www.StudentPrivacyCompass.org.
The Old Line State Does Something New on Privacy
On April 6, the Maryland Senate concurred with House amendments to SB 541, the Maryland Online Data Privacy Act (MODPA), sending the bill to Governor Moore for signature. If enacted, MODPA could be a paradigm-shifting addition to the state privacy law landscape. While recent state comprehensive privacy laws generally have added to the existing landscape in an iterative fashion by making adjustments to the popular Washington Privacy Act (WPA) framework, MODPA is a significant departure from the status quo. Infused with elements derived from the 2022 proposed federal privacy bill, the American Data Privacy and Protection Act of 2022 (ADPPA), MODPA includes novel provisions concerning data minimization, civil rights, and more. In light of these significant substantive differences, there is an argument that MODPA should be regarded as a distinct third model for state comprehensive privacy laws.
In this blog post, we highlight 10 things to know about MODPA that set Maryland apart in the state privacy law landscape.
1. Novel Data Minimization Rules Create Potential Tension with Purpose Limitation Rule
MODPA’s approach to data minimization—default limitations on the ability to collect personal data—sets Maryland apart in the state privacy landscape. Prior to MODPA, state privacy laws typically restricted the collection and use of personal data to what is adequate, relevant, and reasonably necessary in relation to the disclosed purposes for which the data is processed. California, in its regulations, follows a different rule that provides that purposes for which personal information is collected or processed must be consistent with individuals’ reasonable expectations and that collection and processing must be limited to what is reasonably necessary and proportionate to achieve a disclosed purpose.
MODPA establishes a new data minimization framework that places default limitations on both the collection and the processing of personal data. Influenced by the ADPPA, MODPA provides that a controller shall “limit the collection of personal data to what is reasonably necessary and proportionate to provide or maintain a specific product or service requested by the consumer to whom the data pertains.” This is a substantive limit on the purposes for which a controller may collect personal data. When it comes to processing more broadly, however, MODPA includes the standard purpose limitation rule seen in a majority of the states—unless a controller obtains consent, the controller shall not “process personal data for a purpose that is neither reasonably necessary to, nor compatible with, the disclosed purposes for which the personal data is processed, as disclosed to the consumer.”
The distinct standards for “collection” and “processing” create a potential tension between these rules, given that “process” is defined to include “collecting,” which could be read to mean that a controller can collect personal data when not reasonably necessary if the controller obtains consent.
With respect to sensitive data (which, as discussed below, is defined broadly), MODPA again establishes new substantive limits that differ from those in other states. Under MODPA, controllers are prohibited from collecting, processing, or sharing sensitive data except where the collection or processing is “strictly necessary to provide or maintain a specific product or service requested by the consumer to whom the personal data pertains.” This is different from the states’ existing approaches—California allows individuals to opt-out of unnecessary sensitive data processing, whereas most other states require opt-in consent for sensitive data processing.
This new data minimization paradigm has at least three significant ambiguities:
What are the criteria for assessing when collection, processing, and sharing are ‘reasonably’ or ‘strictly’ necessary?
What does it mean to provide or maintain a product or service?
What does it mean for a product or service to be ‘specifically requested’ by a consumer?
The answers to these questions will have significant impact on businesses, especially with respect to back-end data uses that are not apparent in a business-customer relationship, such as product improvement and the launch of new products and features.
This new paradigm also increases the importance of exceptions and limitations to the law, given that controllers will now face stronger limits on the purposes for which they can collect or process personal data. Section 14–4612, for example, preserves controllers’ and processors’ ability to collect, use, or retain personal data for certain internal uses, such as identifying and repairing technical errors or performing internal operations that are either (1) “reasonably aligned with” the consumer’s reasonable expectations or can be “reasonably anticipated based on the consumer’s existing relationship with the controller,” or (2) compatible with processing data in furtherance of providing a specifically requested product or service or the performance of a contract. Even if a controller or processor is relying on an exception to justify a processing activity, however, that processing must still be both “reasonably necessary and proportionate” to the excepted purpose and “adequate, relevant, and limited to what is necessary in relation to the specific purpose listed.”
In adopting these data minimization provisions, Maryland has forged a new path in state privacy law. This approach could provide significant protections for individuals by limiting the collection and use of personal data to purposes that more closely align with reasonable expectations. On the other hand, this approach could foreclose certain socially beneficial and low-risk processing activities that are ancillary to the business-consumer relationship. As stakeholders wait to see the full impact of this approach develop over time, all eyes will be on other state legislatures currently considering similar such standards.
2. Prohibitions against Selling Sensitive Data, Targeted Ads to Minors, and Selling Minors’ Personal Data
MODPA’s strong data minimization rules are supplemented by additional prohibitions on specific processing activities, including:
selling sensitive data (defined broadly to include exchanges for non-monetary valuable consideration);
processing the personal data of an individual for the purpose of targeted advertising if the controller knew or should have known that the individual is under the age of 18; and
selling the personal data of an individual if the controller knew or should have known that the individual is under the age of 18.
These are flat prohibitions with no specific opt-in consent alternatives. The “should have known” standard for minors’ data also differs from the “wilfully disregards” standard included in other state laws and could arguably be interpreted as requiring age-gating of online products and services, as explored by Husch Blackwell’s David Stauss. These prohibitions are still subject to the exceptions to MODPA found in Section 14–4612, such as the performance of a contract to which a consumer is a party.
3. Novel Civil Rights Protection Applicable to Processing Publicly Available Data
State privacy laws typically prohibit controllers from processing personal data in violation of state or federal laws that prohibit unlawful discrimination. MODPA incorporates an additional civil rights protection derived from the ADPPA that prohibits controllers from collecting, processing, or transferring personal data or publicly available data in a manner that “unlawfully discriminates in or otherwise unlawfully makes unavailable the equal enjoyment of goods or services on the basis of race, color, religion, national origin, sex, sexual orientation, gender identity, or disability,” subject to limited exceptions (including self-testing to prevent or mitigate unlawful discrimination and diversifying an applicant or customer pool). One thing to note is that this provision uses the undefined term “publicly available data” rather than the defined term “publicly available information.” Assuming the drafters meant publicly available information, including the processing of that data in this provision is notable given that publicly available information is generally outside the scope of the bill and other state privacy laws. Another notable aspect of this prohibition is that it only prohibits unlawful discrimination, which is potentially a higher threshold than other potential standards, such as all discrimination or unjustified differential treatment.
4. Heightened Protections for Consumer Health Data
2023 was notable for a rise in consumer health privacy laws, including the enactment of the Washington My Health My Data Act (WMHMDA) and the Nevada Consumer Health Data Privacy Law. Connecticut also introduced a novel requirement in 2023 when it passed SB 3, which amended the state’s nascent comprehensive privacy law to include expanded protections for “consumer health data” above and beyond what was already covered by its definition of sensitive data. MODPA incorporates Connecticut-style protections for consumer health data, which it defines as “personal data that a controller uses to identify a consumer’s physical or mental health status” and which includes data related to “gender-affirming treatment or reproductive or sexual health care.” Unlike CT SB 3, however, it appears that under MODPA a person must meet the applicability thresholds of the Act to be subject to these provisions. Additionally, because consumer health data is included in the definition of sensitive data, the minimization rule limiting the collection, processing, or sharing of sensitive data to what is “strictly necessary” to provide or maintain a product or service applies to consumer health data as well. This could mean that MODPA creates stricter requirements for the use of most health information than WMHMDA, which has an opt-in consent alternative to its “necessary” health data processing standard. For more on WMHMDA’s necessity standard, see this recent analysis from Hintze Law’s Kate Black and Felicity Slater and FPF’s Jordan Wrigley and Niharika Vattikonda.
5. Data Protection Assessments May Have Narrower Applicability but Broader Scope
Like most state privacy laws, MODPA will require a controller to conduct and document a data protection assessment (DPA) for each of their processing activities that “present a heightened risk of harm to a consumer.” MODPA’s requirements for conducting a DPA, however, contain a number of unique provisions that could require covered entities to rework their internal strategies for conducting assessments:
Exclusive: Like many states based on the WPA framework, MODPA requires a DPA for processing personal data for targeted advertising, sale of personal data, processing sensitive data, and processing personal data for profiling that presents a reasonably foreseeable risk of certain enumerated harms. However, in contrast to those other states, MODPA provides that heightened risk of harm “means” those activities rather than “includes” those activities. MODPA thus has an exclusive rather than inclusive standard for when a DPA is required, and therefore, the scope of when a DPA is required could be narrower than under other laws.
Algorithms: Under MODPA, a controller shall conduct DPAs for processing activities that present a heightened risk of harm, “including an assessment for each algorithm that is used.” This requirement is novel and, if read strictly (a definition of “algorithm” is not provided), could require covered organizations to conduct hundreds or thousands of assessments.
Necessity & Proportionality: MODPA contains a novel DPA provision that requires controllers to consider “the necessity and proportionality of processing in relation to the stated purpose of the processing.” This requirement ties back to the general data minimization rule that collection of personal data must be “reasonably necessary and proportionate to provide or maintain a specific product or service requested.”
6. Broad and Divergent Definitions
MODPA contains a number of definitions that are unique or that diverge from those in other state privacy laws, including—
Biometric Data: The definition of biometric data in MODPA is broad, encompassing data that can be used to uniquely identify a consumer’s identity. This differs from most state privacy laws, which instead limit biometric data to include only data that are, or are intended to be, used to identify an individual.
Decisions that Produce Legal or Similarly Significant Effects: MODPA follows the majority of states in allowing individuals to opt out of solely automated profiling in furtherance of decisions that produce legal or similarly significant effects, but MODPA does not include decisions relating to insurance in that definition.
De-identified data: MODPA cross-references the Maryland Genetic Information Privacy Act to define de-identified data. Although that definition is substantially similar to the language found in a majority of state comprehensive privacy laws, it is not identical because it does not address data that can reasonably be used to infer information about or otherwise be linked to a device that may be linked to an identified or identifiable consumer.
Publicly Available Information: MODPA incorporates Utah’s three-part definition of publicly available information, which, in contrast to narrower definitions in states like Connecticut or Delaware, includes information obtained from a person to whom the consumer disclosed the information if the consumer did not restrict that information to a specific audience. Although this broader definition generally exempts more data from coverage under the bill than under other laws, publicly available information is still subject to MODPA’s novel civil rights protection highlighted above. Publicly available information does not include biometric data collected by a business without a consumer’s knowledge.
Sale of Personal Data: MODPA broadens the definition of sale to explicitly include exchanges of personal data to third parties by processors and affiliates of controllers or processors.
Sensitive Data: MODPA’s definition of sensitive data includes many elements seen in laws enacted in recent years (such as data revealing sex life, sexual orientation, or status as transgender or nonbinary). It is also broader than other states’ definitions in a few ways.
In contrast to Connecticut, sensitive data includes data “revealing” consumer health data (rather than data that “is” consumer health data).
Sensitive data includes biometric data which, as specified above, is broader than in other state laws.
Sensitive data includes personal data “of a consumer that the controller knows or has reason to know is a child.” This differs from “known child” language seen in other states.
7. Low Applicability Thresholds and Notable Exemptions
MODPA will apply to persons that either (1) control or process the personal data of at least 35,000 consumers during a calendar year, excluding data processed solely for the purpose of completing a payment transaction, or (2) control or process the personal data of at least 10,000 consumers and derive more than 20% of gross revenue from the sale of personal data. These thresholds are uniquely low relative to Maryland’s population of 6.2 million. For comparison, Colorado has a similar population of 5.9 million but sets thresholds of 100,000 and 25,000, whereas Delaware has similar thresholds of 35,000 and 10,000 but a total population of only 1 million.
In addition to the low applicability thresholds, MODPA includes notable entity-level and data-level exemptions. MODPA includes an entity-level exemption for financial institutions and affiliates (and data) subject to GLBA. Additionally, although nonprofits are generally subject to MODPA, there is a specific exemption for non-profits that process or share personal data solely for the purpose of assisting either law enforcement in investigating insurance crime or fraud or “first responders in responding to catastrophic events.” MODPA includes data-level exemptions for data subject to HIPAA, FCRA, FERPA, and personal data collected by or on behalf of a person subject to Maryland’s Insurance article “in furtherance of the business of insurance.”
8. No Fraud Exception for Complying with Opt-out Requests
The Act provides relatively standard consumer rights to access, correct, delete, and obtain a portable copy of personal data, as well as to opt out of targeted advertising, sales of personal data, and solely automated profiling in furtherance of decisions with legal or similarly significant effects. Unlike other state laws, however, MODPA does not give controllers an explicit right to reject opt-out requests that they suspect are fraudulent.
9. Enforcement is Vested in the Attorney General, but Other Remedies Provided by Law Are Not Foreclosed
Violations of MODPA are tied to the Maryland Consumer Protection Act, and the Act specifically denies private enforcement under Md. Code Com. Law § 13-408, leaving enforcement solely with the Division of Consumer Protection of the Office of the Attorney General. However, the Act specifies that “[t]his section does not prevent a consumer from pursuing any other remedy provided by law.” This language differs from that seen in other states, some of which say that nothing in the law shall be construed as providing the basis for a private right of action for violations of that law “or any other law.” This provision thus could be interpreted as allowing individuals to bring private suits for violations under other causes of action. Similar concerns were raised by industry members when New Jersey enacted S332 in January.
10. Notice Required for Third-Party Use Inconsistent with Past Promises
MODPA contains a novel provision requiring that “[i]f a third party uses or shares a consumer’s information in a manner inconsistent with the promises made to the consumer at the time of collection . . . , the third party shall provide an affected consumer with notice of the new or changed practice before implementing the new or changed practice,” so as to allow a consumer to exercise their rights under the Act. The scope of this provision is ambiguous as the Act neither defines information nor specifies when a third party’s use or sharing of information is inconsistent with promises made to an individual. Additionally, the notice provision does not specify any requirements with respect to consent (such as allowing an individual to revoke previously given consent).
Conclusion
MODPA could portend a paradigm shift in state privacy laws if policymakers in other states follow suit and venture towards rules that impose default limitations on companies’ ability to collect and use personal data. Much will depend on how MODPA’s novel provisions are interpreted. As David Stauss identified in his analysis of MODPA, the Maryland Attorney General has inherent, permissive rulemaking authority with respect to unfair or deceptive trade practices, so it is possible that clarifying regulations could be issued to guide compliance.
On April 6, Maryland became the second state to pass an Age-Appropriate Design Code when the Maryland Senate concurred with House amendments to SB 571. That bill, if enacted by the Governor, will take effect on October 1, 2024, a year before MODPA would take effect. Stay tuned for FPF’s forthcoming analysis of the Maryland Age-Appropriate Design Code Act.
China’s Interim Measures for the Management of Generative AI Services: A Comparison Between the Final and Draft Versions of the Text
Authors: Yirong Sun and Jingxian Zeng
Edited by Josh Lee Kok Thong (FPF) and Sakshi Shivhare (FPF)
The following is a guest post to the FPF blog by Yirong Sun, research fellow at the Guarini Institute for Global Legal Studies at NYU School of Law: Global Law & Tech, and Jingxian Zeng, research fellow at the University of Hong Kong Philip K. H. Wong Centre for Chinese Law. The guest blog reflects the opinion of the authors only. Guest blog posts do not necessarily reflect the views of FPF.
On August 15, 2023, the Interim Measures for the Management of Generative AI Services (Measures) – China’s first binding regulation on generative AI – came into force. The Interim Measures were jointly issued by the Cyberspace Administration of China (CAC), along with six other agencies, on July 10, 2023, following a public consultation on an earlier draft of the Measures that concluded in May 2023.
This blog post is a follow-up to an earlier guest blog post, “Unveiling China’s Generative AI Regulation” published by the Future of Privacy Forum (FPF) on June 23, 2023, that analyzed the earlier draft of the Measures. This post compares the final version of the regulation with the earlier draft version and highlights key provisions.
Notable changes in the final version of the Measures include:
A shift in institutional dynamics, with the CAC playing a less prominent role;
Clarification of the Measures’ applicability and scope;
Introduction of responsibilities for users;
Introduction of additional responsibilities for providers, such as taking effective measures to improve the quality of training data, signing service agreements with registered users, and promptly addressing illegal content;
Assignment of responsibilities to government agencies to strengthen the management of generative AI services; and
Introduction of a transparency requirement for generative AI services, in addition to the existing responsibilities for providers to increase the accuracy and reliability of generated content.
Introduction
The stated purpose of the Measures, a binding administrative regulation within the People’s Republic of China (PRC), is to promote the responsible development and regulate the use of generative AI technology, while safeguarding the PRC’s national interests and citizens’ rights. Notably, the Measures should be read in the context of other Chinese regulations addressing AI and data, including the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and the Law on Scientific and Technological Progress.
Central to the Measures is the principle of balancing development and security. The Measures aim to encourage innovation while also addressing potential risks stemming from generative AI technology, including the manipulation of public opinion and the dissemination of sensitive or misleading information at scale. The Measures also:
Address a range of societal concerns, including data breaches, fraudulent activities, privacy violations, and intellectual property infringements,
Provide mechanisms for oversight inspections, the right to file complaints, and penalties for non-compliance, and
Coordinate different stakeholders involved in generative AI.
The next section provides some context on the finalization process of the Measures.
The final Measures were shaped significantly by private and public input
The initial draft of the Measures was released for public consultation on April 11, 2023. Following the conclusion of the consultation period on May 10, 2023, the final version of the Measures received internal approval from the CAC on May 23, 2023, and were subsequently made public on July 10, 2023 before formally coming into force on August 15, 2023.
Several significant changes in the final version of the Measures appear attributable to feedback from various industry stakeholders and legal experts. These industry stakeholders and legal experts include leading tech and AI companies such as Baidu, Xiaomi, SenseTime, YITU, Megvii, and CloudWalk, as well as research institutes affiliated with authorities such as the MIIT. The stakeholders’ input, including public statements on the draft Measures (which were referred to in FPF’s earlier guest blog), appears to have played a role in influencing the revisions made in the final version of the Measures.
In addition, certain changes may also have been influenced by industry policies and standards at the central and local government levels. In particular, between May 2023 and July 2023, China’s National Information Security Standardization Technical Committee (also known as “TC260”) published two “wishlists” (here and here), outlining 48 upcoming national recommended standards. Among these standards, three were specifically focused on generative AI, with the aim of shaping the enforcement of the requirements specified in the final version of the Measures.
The next few paragraphs highlight changes to the overall contours of the Measures.
A key change in the final Measures is the allocation of regulatory responsibility for generative AI
A major difference between the draft and final versions of the Measures is in the allocation of administrative responsibility for generative AI. The final version of the Measures allowed for greater collaboration amongst public institutions compared to the draft version, with the CAC playing a less prominent role. The other six agencies involved in issuing the final version of the Measures are the National Development and Reform Commission (NDRC); the Ministry of Education; the Ministry of Science and Technology (MoST); the Ministry of Industry and Information Technology (MIIT); the Ministry of Public Security; and the National Radio and Television Administration.
Notably, the task to promote AI advancement amid escalating concerns is to be overseen by authorities other than the CAC, such as MoST, MIIT, and NDRC.
Another significant difference is the inclusion of three pro-business provisions – namely, Articles 3, 5, and 6 – in the final version of the Measures. These Articles provide as follows:
Article 3: “The state is to adhere to the principle of placing equal emphasis on development and security, merging the promotion of innovation with governance in accordance with law; employing effective measures to encourage innovation and development in generative AI, and carrying out tolerant and cautious graded management by category of generative AI services.” [emphasis added]
Article 5: “Encourage the innovative application of generative AI technology in each industry and field, generate exceptional content that is positive, healthy, and uplifting, and explore the optimization of usage scenarios in building an application ecosystem.
Support industry associations, enterprises, education and research institutions, public cultural bodies, and relevant professional bodies, etc. to coordinate in areas such as innovation in generative AI technology, the establishment of data resources, applications, and risk prevention.” [emphasis added]
Article 6: “Encourage independent innovation in basic technologies for generative AI such as algorithms, frameworks, chips, and supporting software platforms, carry out international exchanges and cooperation in an equal and mutually beneficial way, and participate in the formulation of international rules related to generative AI.
Promote the establishment of generative AI infrastructure and public training data resource platforms. Promote collaboration and sharing of algorithm resources, increasing efficiency in the use of computing resources. Promote the orderly opening of public data by type and grade, expanding high-quality public training data resources. Encourage the adoption of safe and reliable chips, software, tools, computational power, and data resources.” [emphasis added]
These provisions impose fewer obligations on generative AI service providers than those in the draft version of the Measures. They emphasize the balance between development and security in generative AI, the promotion of innovation while ensuring compliance with the law, support for the application of AI across industries to generate positive content, and collaboration among various entities. They also emphasize independent innovation in AI technologies, international cooperation, and the establishment of infrastructure for sharing data resources and algorithms.
These shifts may be attributed to the above-mentioned feedback received on the draft version of the Measures from industry stakeholders and legal experts.
This article now turns to changes in specific provisions in the final Measures and their implications.
1. The Measures see significant changes in respect of their domestic and extraterritorial applicability
The Measures narrow the scope of “public” by excluding certain entities and service providers not providing services in PRC
The Measures apply to organizations that provide generative AI services to “the public in the territory of the People’s Republic of China”. While the Measures do not define “generative AI services”, Article 2 clarifies that the Measures apply to services that use models and related technologies to generate text, images, audio, video, and other content.
The Measures appear to address some concerns raised in the previous article about the ambiguity surrounding the undefined term “public”. For example, one of the questions raised in the previous article (in respect of the draft Measures) was whether a service licensed exclusively to a Chinese private entity for internal use would fall within the scope of the Measures, considering scenarios where a generative AI service might be made available only to certain public institutions or customized for individual customers. The Measures appear to partially address this ambiguity by removing certain entities from the scope of “the public”. Specifically, Article 2 now clarifies that the Measures do not apply to certain entities (industrial organizations, enterprises, educational and scientific research institutions, public cultural institutions, and related specialized agencies) if they research, develop, and use generative AI technologies but do not provide generative AI services to the public in the PRC. Further clarification may be found in an expert opinion published on the CAC’s public WeChat account supporting the internal use of generative AI technologies and the vertical supply of generative AI technologies among these entities.
This change also significantly narrows the scope of the Measures compared with other existing Chinese technology regulations. In comparison, the rules on deep synthesis and recommendation algorithms apply to any service that uses generative AI technologies, regardless of whether these services are used by individuals, enterprises or “the public”.
Future AI regulation in China may not share the Measures’ focus on “the public”. For instance, the recent China AI Model Law Proposal, an initiative of the Chinese Academy of Social Sciences (CASS) and a likely precursor to a more comprehensive AI law, does not appear to have such a limitation on its scope.
The Measures now have extraterritorial effect to address foreign provision of generative AI services to PRC users
The Measures also appear to have been tweaked to apply extraterritorially. Specifically, Article 2 provides that the Measures apply to a generative AI service so long as it is accessible to the public in the PRC, regardless of where the service provider is located.
This change appears to have been prompted by users trying to circumvent the application of the Measures on generative AI service providers based overseas. Specifically, to avoid compliance with Chinese regulators, several foreign generative AI service providers have limited access to their services from users in the PRC, such as by requiring foreign phone numbers for registration or requiring international credit cards during subscription. In practice, however, users have been able to access the services of these foreign generative AI service providers by following online tutorials or purchasing foreign-registered accounts on the “black market“. For example, though ChatGPT does not accept registrations from users in China, ChatGPT logins were available for sale on Taobao shortly after its initial release. Such activity has drawn the attention of the Chinese government, which had to take enforcement action against such platforms even before the Measures were formulated.
In practice, the CAC is expected to adopt a “technical enforcement” strategy against foreign generative AI services. Article 20 of the Measures empowers the CAC to take action against foreign service providers that do not comply with relevant Chinese regulations, including the Measures. Under this provision, the CAC may notify relevant agencies to take “technical measures and other necessary actions” to block Chinese users’ access to these services. A similar provision is found in Article 50 of the Cybersecurity Law, which addresses preventing the spread of illegal information outside of the PRC.
2. The Measures relax providers’ obligations while assigning users with new responsibilities
As elaborated below, the CAC adjusted the balance of obligations between generative AI service providers and users in the final version of the Measures. To recap, Article 22 of the final version of the Measures defines “providers” as companies that offer services using generative AI technologies, including those offered through application programming interfaces (APIs). It also defines “users” as organizations and individuals that use generative AI services to generate content.
The Measures adopt a more relaxed stance on generative AI hallucination
The Measures seek to address hallucinations of generative AI in two main ways.
First, the Measures shift focus from outcome-based to conduct-based obligations for providers. The draft version of the Measures adopted a strict compliance approach, whereas the final version focuses on the actions generative AI service providers take to address hallucinations – a more flexible duty of conduct. In the draft version of the Measures, Article 7 required providers to ensure the authenticity, accuracy, objectivity and diversity of the data used for pre-training and optimization training. However, the final version of the Measures has softened this stance, expecting providers simply to “take effective measures to improve” the quality of data. This revision recognizes the technical challenges of developing generative AI, including the heavy reliance on data made available on the Internet (which makes ensuring the authenticity, accuracy, objectivity and diversity of the training data practically impossible).
Second, the Measures no longer require generative AI service providers to prevent “illegal content” (which is not defined in Article 14, but is likely to refer to “content that is prohibited by laws and administrative regulations” under Article 4.1) from being re-generated within three months. Instead, Article 14.1 of the Measures merely requires providers to immediately stop the generation of illegal content, cease its transmission, and remove it. The Measures also require generative AI service providers to report the illegal content to the CAC (Article 14).
The Measures relax penalties for generative AI service providers, but mandate other regulatory requirements
The Measures relax penalties for violations, notably removing all references to service termination or fines. Specifically, Article 20.2 of the draft Measures had provided for the suspension or termination of generative AI services and the imposition of fines of between 10,000 and 100,000 yuan where generative AI service providers refused to cooperate or committed serious violations. However, Article 21 of the Measures merely provides for suspension of services.
The relaxed penalty regime, however, appears to be balanced against the imposition of mandatory security assessment and algorithm filings in certain cases. Article 17 of the Measures requires generative AI service providers providing generative AI services “with public opinion properties or the capacity for social mobilization” to carry out security assessments and file their algorithms based on the requirements set out under the “Provisions on the Management of Algorithmic Recommendations in Internet Information Services” (which regulate algorithmic recommendation systems in, inter alia, social media platforms). This targeted approach thus avoids a blanket requirement for all services to undergo a security assessment based on a presumption of potential influence on the public.
While the practical impact of this added assessment and filing requirement remains unclear, it is notable that by September 4, 2023 (less than a month after the Measures came into force), it was reported that eleven companies had completed algorithmic filings and “received approval” to provide their generative AI services to the public. Given that these filings are usually also tied to a security assessment, this development suggests that the companies had also passed their security assessments. From the report, however, it is unclear whether these companies were required under the Measures to file their generative AI services; some may have voluntarily completed these processes to reduce future compliance risks.
The Measures also adopt narrower, albeit more stringent, inspection requirements. Under Article 19, when subject to “oversight inspections”, generative AI service providers are required to cooperate with the relevant competent authorities and provide details of the source, scale and types of training data, annotation rules and algorithmic mechanisms. They are also required to provide the necessary technical and data support during the inspection. This appears to have been narrowed from its corresponding provision in the draft Measures (specifically, Article 17 of the draft Measures), which also required generative AI service providers to provide details such as “the description of the source, scale, type, quality, etc. of manually annotated data, foundational algorithms and technical systems” on top of those required under Article 19. However, Article 19 introduces greater stringency by explicitly requiring providers to supply the actual training data and algorithms, as opposed to draft Article 17, which required only descriptions. Article 19 also introduces a section outlining the responsibilities of enforcement authorities and staff in relation to data protection.
The Measures also introduce provisions that impact users of generative AI services
The Measures introduce provisions that impact the balance of obligations between generative AI service providers and their users in three main areas:
1. Use of user input data to profile users: Article 11 contains a notable difference between the final and draft version of the Measures as regards the ability for generative AI service providers to profile users based on their input data. Specifically, while the draft Measures had strictly prohibited providers from profiling users based on their input data and usage patterns, this restriction is noticeably absent in the final Measures. The implication appears to be that generative AI service providers now have greater leeway to utilize users’ data input to profile them.
2. Providers to enter into service agreements with users: The second paragraph of Article 9 requires generative AI service providers to enter “service agreements” with users that clarify their respective rights and obligations. While the introduction of this provision may indicate a stance towards allowing private risk allocation, it is still subject to several limitations. First, this provision should be read in conjunction with the first paragraph of Article 9, which states that providers ultimately “bear responsibility” for producing online content and handling personal information in accordance with the law. Thus, the Measures do not permit providers to fully shift liability to users via service agreements. Second, even when the parties outline their respective rights and obligations, whether they can allocate their rights and obligations fairly and efficiently will depend on various factors, such as the resources available to them and the existence of information asymmetries between parties.
3. Responsibilities of Users: Article 4(1) appears to extend obligations to users to ensure that generative AI services “(u)phold the Core Socialist Values”. This means that users must also refrain from creating or disseminating content that incites subversion, glorifies terrorism, promotes extremism, encourages ethnic discrimination or hatred, and any content that is violent, obscene, pornographic, or contains misleading and harmful information. This provision is significant given that the draft Measures did not initially include the obligations of users.
3. The Measures assign responsibility to generative AI service providers as producers of online information content, although the scope of obligation remains unclear
Under Article 9, the Measures state that generative AI service providers shall bear responsibility as the “producers of online information content (网络信息内容生产者)”. This terminology aligns with the CAC’s 2019 Provisions on the Governance of the Online Information Content Ecosystem (2019 Provisions), in which the CAC outlined an online information content ecosystem consisting of content producers, content service platforms, and service users, each with shared but distinct obligations in relation to content. In its ‘detailed interpretation’ of the 2019 Provisions, the CAC defined content producers as entities (individuals or organizations) that create, reproduce, and publish online content. Service platforms are defined as entities that offer online content dissemination services, while users are individuals who engage with online content services and may express their opinions through posts, replies, messages, or pop-ups.
This allocation of responsibility as online information content producers under the Measures can be contrasted with the position under the draft Measures, which referred to generative AI service providers as “generated content producers (生成内容生产者)”. This designation was legally unclear, as it was a new and undefined term.
However, the legal position following this allocation of responsibility under the Measures is still unclear. Unlike content producers as defined under the 2019 Provisions, generative AI service providers have a less direct relationship with the content produced by their generative AI services (given that content generation is not prompted by these service providers, but by their users).
To further complicate matters, Article 9 also imposes “online information security obligations” on generative AI service providers. These obligations are set out in Chapter IV of China’s Cybersecurity Law. This means that the scope of generative AI service providers’ online information security obligations can only be determined by jointly reading the Cybersecurity Law, the Measures, the 2019 Provisions, as well as user agreements between generative AI service providers and their users.
In sum, while there is slightly greater legal clarity on generative AI service providers’ responsibilities as regards content generated by their services, more clarity is needed on the exact scope of these obligations. It may only become clearer when the CAC carries out an investigation under the Measures.
Conclusion: While clearer than before, the precise impact of the Measures will only be fully understood in the context of other regulations and global developments.
Notwithstanding the greater clarity provided in the Measures, their full significance cannot be understood in isolation. Instead, they need to be read closely with existing laws and regulations in China. These include existing regulations introduced by the CAC on recommendation algorithms and deep synthesis services. Nevertheless, the Measures will give the CAC additional regulatory firepower to deal with prominent societal concerns around algorithmic abuses, youth Internet addiction, and issues such as deepfake-related fraud, fake news, and data misuse.
Further, while China’s AI industry contends with the Measures and their implications, it may soon have to contend with another regulation: an overarching comprehensive AI law. In May 2023, China’s State Council discreetly announced plans to draft an AI Law. This was followed by the release of a draft model law by the Chinese Academy of Social Sciences, a state research institute and think tank. Key features of the model law include a balanced approach to development and security through an adjustable ‘negative list,’ the establishment of a National AI Office, adherence to existing technical standards and regulations, and a clearer delineation of responsibilities within the AI value chain. In addition, the proposed rules indicate strong support for innovation through the introduction of preemptive regulatory sandboxes, broad ex post non-enforcement exemptions, and various support measures for AI development, including government-led initiatives to promote AI adoption. Finally, the impact of the Measures will need to be studied alongside international developments, such as the EU AI Act and the UK’s series of AI Safety Summits. Regardless of how these international developments unfold, it is clear that the Measures – and other regulations introduced by the CAC on AI – are helping China build a position of global thought leadership, as seen from the UK’s invitation to China to its inaugural AI Safety Summit. As governments around the world rush to comprehend rapid generative AI developments, China has certainly made an impression as the first jurisdiction globally to introduce binding regulations on generative AI.
Two New Apple and Google Platform Privacy Requirements Kicking In Now
Apple’s important mandatory requirements affecting iOS apps are about to kick in, and Google’s new requirements for publishers and advertisers have just gone into effect. Accurately implementing these requirements calls for close cooperation between the legal, privacy, and ad ops teams.
Apple’s Privacy Manifests
At WWDC 2023, Apple announced privacy manifests, signatures for SDKs, and required reason APIs. In early 2024, Apple began requiring a privacy manifest for every new or updated app and every third-party Software Development Kit (SDK) in the Apple App Store. The privacy manifest must include four pieces of information:
The type of data collected by the app or SDK.
How the data collected will be used by the app or the SDK.
Whether the data are linked to the user.
Whether the data are used for tracking, as defined by Apple.
What are Privacy Manifests, and what benefits do they provide?
Privacy Manifests are an important tool for third-party SDK developers to communicate critical information about their privacy practices to app developers and Apple, and for app developers to communicate that information to Apple. A privacy manifest describes in detail an app’s or SDK’s use of data and of select system APIs, called “required reason APIs,” and completing one accurately may require collaboration with legal teams. Data categories include Contact Information, Health and Fitness, Financial Information, Location, Search History, User Content, Purchases, and a category for Other Data Types not covered by one of the defined categories. The data collected in each category should be assigned a defined purpose in the property file. Example purposes include App Functionality, Analytics, and Third-party Advertising; a defined “other purposes” category exists as a catch-all.
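To make this structure concrete, below is a minimal, hypothetical sketch of a PrivacyInfo.xcprivacy property list declaring one collected data type, its purpose, and a tracking domain. The domain and data choices are illustrative only; developers should confirm the exact keys and accepted values against Apple's current documentation.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Whether the app or SDK uses data for tracking, as defined by Apple -->
    <key>NSPrivacyTracking</key>
    <true/>
    <!-- Domains contacted for tracking; hypothetical domain for illustration -->
    <key>NSPrivacyTrackingDomains</key>
    <array>
        <string>ads.example-sdk.com</string>
    </array>
    <!-- Categories of data collected, whether they are linked to the user,
         whether they are used for tracking, and the purposes of collection -->
    <key>NSPrivacyCollectedDataTypes</key>
    <array>
        <dict>
            <key>NSPrivacyCollectedDataType</key>
            <string>NSPrivacyCollectedDataTypeEmailAddress</string>
            <key>NSPrivacyCollectedDataTypeLinked</key>
            <true/>
            <key>NSPrivacyCollectedDataTypeTracking</key>
            <false/>
            <key>NSPrivacyCollectedDataTypePurposes</key>
            <array>
                <string>NSPrivacyCollectedDataTypePurposeAppFunctionality</string>
            </array>
        </dict>
    </array>
</dict>
</plist>
```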
Privacy Manifests provide several benefits once defined. First, they build on App Tracking Transparency (ATT): any network request to a declared tracking domain will automatically fail when the user has chosen not to be tracked. Building this into the platform ensures that apps and SDKs cannot accidentally violate user consent, because it is simply impossible for the app to complete the network request. App developers who are unaware of the tracking performed by third-party SDKs no longer need to worry about enabling it inadvertently; they can simply declare the tracking domains they know they need to use.
Second, privacy manifests allow developers and Apple to know why third-party SDKs and apps are using select system APIs. This is possible because every developer must specify their reason for needing to use these system APIs. Functionally, this reason is specified in a similar manner to data categorization and use described above. Instead of defined data categories and purposes, developers must select a defined reason for using any of the APIs defined in the developer documentation of the privacy manifest feature. These requirements will start being enforced on May 1st.
The “required reason” API feature appears intended to prevent software fingerprinting, which is a type of tracking that uses differences in preferences, settings, and hardware capabilities to uniquely identify users. Consider an API that returns how much space is left on the file system. It could be called to ensure there is enough space for a large network transfer, but it could also be called to harvest a data point that helps uniquely identify a device. The former is an acceptable reason that can be specified as such in a privacy manifest, whereas the latter may raise privacy implications or violate platform guidelines.
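Continuing the hypothetical manifest sketched above, an entry declaring that disk-space check as a "required reason" API use might look like the fragment below. The category key follows Apple's documented naming; the reason code shown reflects our reading of the approved codes for checking available space and should be verified against the current developer documentation.

```xml
<!-- Added inside the top-level <dict> of the PrivacyInfo.xcprivacy sketch above -->
<key>NSPrivacyAccessedAPITypes</key>
<array>
    <dict>
        <!-- The app or SDK reads available disk space, a "required reason" API category -->
        <key>NSPrivacyAccessedAPIType</key>
        <string>NSPrivacyAccessedAPICategoryDiskSpace</string>
        <!-- Reason code believed to cover checking for sufficient space before
             writing files; confirm the exact code in Apple's documentation -->
        <key>NSPrivacyAccessedAPITypeReasons</key>
        <array>
            <string>E174.1</string>
        </array>
    </dict>
</array>
```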
Third, organizations implementing privacy manifests can generate a Privacy Report by automatically combining the application’s privacy manifest with all of the privacy manifests of the third-party SDKs used by that app. The report is a PDF that describes data and API uses broken down by category (e.g., contact information, health and fitness, etc.). It does not replace Apple’s Privacy Nutrition Labels in the App Store, but can be used by organizations as a reference when making those assessments.
Finally, Apple has defined and will maintain a list of third-party SDKs that require a privacy manifest and an application signature. Developers have had to be extremely cautious in adopting new SDKs because they are responsible for all the code in their app as well as the code in third-party SDKs included in their app. The goal of combining privacy manifests with an application signature is to improve the privacy and security of the software supply chain by helping developers determine when data practices have changed and respond appropriately to those changes. For example, developers may choose to update their Privacy Nutrition Label or replace a third-party SDK that no longer has acceptable data practices.
How should developers prepare for this update?
App developers who want to remain in the App Store must prepare a Privacy Manifest. Some aspects of the privacy manifest will be quite straightforward, like uses of data and APIs that are part of the software’s core functionality and clearly fit into the defined categories. Other aspects may not be immediately obvious. Therefore, developers should be proactive in reaching out to the appropriate people within their organization to ensure they provide the most accurate categorization possible. The goal is clear: the privacy manifest should be a comprehensive report on all data used by the application, but it is not prose text, just a categorization of data collection and usage rationale based on the defined categories and purposes in the Privacy Manifest specification.
Google’s Consent Mode v2
Google began enforcing changes to its advertising platforms in Europe starting March 2024. These changes require publishers to update to Consent Mode version 2 in either a basic or an advanced configuration.
A brief history and description of Consent Mode and Consent Mode v.2
Consent Mode was released in 2020 as part of Google Tag Manager, a tool available to publishers using Google Advertising services that provides publishers with an optional set of controls for advertising and analytics tags. Consent Mode helps publishers to communicate user consent status to Google such that it can guide future interactions with any person, such as tracking or advertising. Consent Mode works with Consent Management Platforms (CMPs) to provide more options to publishers seeking to comply with European data protection regulations in their advertising technology stack, including advertising and analytics tags for both Google and third parties. Google Ads also supports the IAB’s Transparency and Consent Framework (TCF), and recommends implementing either TCF or Consent Mode to communicate consent, but not both. If both are implemented, Google respects the most conservative setting communicated, and their recommendation to implement only one of these two options is driven primarily by performance considerations.
In late 2023, Google released Consent Mode version 2, an update that was designed to provide more nuance in recording an individual’s preferences as well as in reaction to legal updates in Europe. Specifically, Consent Mode version 2 introduces two new parameters: ad_user_data, which captures consent for personalized advertising, and ad_personalization, which captures consent for remarketing. These parameters do not have an impact on how tags operate on the publisher site and only communicate how user data can be used for advertising to Google.
By way of comparison, the parameters from Consent Mode version 1 are ad_storage, which enables the storage of identifiers for advertising on both web and mobile platforms, and analytics_storage, which enables the storage of identifiers for analytics on both web and mobile platforms. One way to think about these changes is to treat the version 1 parameters as qualifiers for which identifiers can be stored and the version 2 parameters as instructions to Google on how the collected data may be processed.
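As a sketch of how these parameters appear in practice, the snippet below sets default consent states for both the version 1 and version 2 parameters through the gtag API before any Google tags fire; the placement and default values are illustrative rather than prescriptive.

```javascript
// Illustrative Consent Mode defaults, set before any Google tags load.
// All four signals start as "denied" until a consent management platform
// (CMP) records the user's choice.
window.dataLayer = window.dataLayer || [];
function gtag() { dataLayer.push(arguments); }

gtag('consent', 'default', {
  // Consent Mode version 1 parameters: whether identifiers may be stored
  'ad_storage': 'denied',
  'analytics_storage': 'denied',
  // Consent Mode version 2 parameters: how Google may use the data collected
  'ad_user_data': 'denied',
  'ad_personalization': 'denied'
});
```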
With the new parameters introduced in Consent Mode version 2, Google also introduced two new configurations: a Basic configuration that prevents any loading of Google’s tags without user consent, and an Advanced configuration that loads Google’s tags prior to user consent but sends only a cookieless ping until user consent is obtained. The Advanced configuration can be customized for each advertiser tag. Sites that rely on Consent Mode and want to ensure that tags remain able to collect information with consent must implement either the Basic or the Advanced Consent Mode version 2 configuration.
What should publishers using Google advertising services do to comply in response?
First, publishers hosting a site with users in the European Economic Area (EEA) should, at an absolute minimum, implement Consent Mode version 2 in its Basic configuration.
If you have done nothing else, a Basic configuration of Consent Mode is a relatively quick way to ensure that you are not collecting data without user consent.
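For instance, once the CMP records the user's choices, the defaults can be updated with a call along the lines of the sketch below. The callback name and choices object are hypothetical (most CMPs provide templates that issue this call automatically), and the snippet assumes gtag() and the denied defaults were set as in the earlier example.

```javascript
// Hypothetical CMP callback: flips the earlier "denied" defaults once the
// user has made a choice, after which held-back tags operate normally.
function onConsentChanged(choices) {
  gtag('consent', 'update', {
    'ad_storage': choices.advertising ? 'granted' : 'denied',
    'ad_user_data': choices.advertising ? 'granted' : 'denied',
    'ad_personalization': choices.advertising ? 'granted' : 'denied',
    'analytics_storage': choices.analytics ? 'granted' : 'denied'
  });
}
```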
Second, publishers can create an Advanced configuration with their advertising and marketing team. Advanced configurations are capable of more nuanced privacy controls that may more efficiently achieve advertising goals. This approach can include AI modeling, templates for different consent management platforms, and per-advertiser configuration of tags. The details of a custom configuration are outside the scope of this post, but an Advanced configuration may prove to be the best option available for many publishers.
Summary
European data protection requirements and related DPA enforcement and court decisions continue to shape the technology and policy interactions between different stakeholders in the ad tech ecosystem. Requirements that large platforms face under the DSA, the DMA, and other EU digital strategy developments will continue to drive new platform obligations. Google began enforcing Consent Mode v2 in March, and Apple will start fully enforcing its privacy manifest requirements on May 1st. Both of these features will be implemented by developers, but each has legal implications that likely require detailed privacy review.
FPF Submits Comments to the Office of Management and Budget on AI and Privacy Impact Assessments
On April 1, 2024, the Future of Privacy Forum filed comments to the Office of Management and Budget (OMB) in response to the agency’s Request for Information on how privacy impact assessments (PIAs) may mitigate privacy risks exacerbated by AI and other advances in technology. The OMB issued the RFI pursuant to the White House’s Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
As privacy impact assessments are a well-established means for both public and private entities to assess privacy risks in their services, products, and programs, there is a tremendous opportunity for federal agencies to apply lessons from existing data privacy practice to the challenges that AI presents as a rapidly evolving technology.
In our submission, FPF provides several recommendations to the OMB, including:
1. Clearly defining the scope of PIAs for AI to explicitly encompass considerations of all risks posed by the processing of personal data, including algorithmic discrimination;
2. Recognizing that risks addressed in a PIA, including discrimination risks, should be complementary to, and neither a replacement nor a repetition of, a comprehensive AI risk assessment or other AI-related assessment; and
3. Ensuring that the scope and substance of a PIA for AI tools account for role-specific responsibilities and capabilities in the AI system lifecycle.
Given that AI can create risks for individuals, communities, and societies, it is imperative to ensure that organizations perform a risk analysis on their use of AI tools, especially when such tools are used to make consequential decisions.
“Whether conducted by the public sector, private companies, or other entities, privacy impact assessments can play an important role in evaluating and mitigating certain risks associated with technology. As the federal government now looks to determine the usefulness of privacy impact assessments for responsible AI governance and development, FPF looks forward to continuing to provide insights to policymakers and companies alike as they grapple with the unique privacy challenges associated with the use of AI tools and other emerging technologies.”
– Anne J. Flanagan, FPF Vice President for Artificial Intelligence