Synthetic Content: Exploring the Risks, Technical Approaches, and Regulatory Responses
Today, the Future of Privacy Forum (FPF) released a new report, Synthetic Content: Exploring the Risks, Technical Approaches, and Regulatory Responses, which analyzes the approaches being pursued to address the risks associated with “synthetic” content: material produced by generative artificial intelligence (AI) tools. As more people use generative AI to create synthetic content, civil society, media, and lawmakers are paying greater attention to its risks, such as disinformation, fraud, and abuse. Legislation to address these risks has focused primarily on disclosing the use of generative AI, increasing transparency around generative AI systems and content, and placing limitations on certain synthetic content. However, while these approaches may address some of the challenges synthetic content presents, each is limited in its reach and involves tradeoffs that policymakers should weigh going forward.
This report highlights the following themes:
- Synthetic content can raise a number of risks, including political disinformation and misinformation, fraud, non-consensual intimate imagery (NCII), and child sexual abuse material (CSAM).
- Policymakers and others are exploring various technical, organizational, and legal approaches to addressing synthetic content’s risks, such as requiring authentication techniques and placing limitations on certain uses of synthetic content.
- Current approaches to regulating synthetic content may face a number of limitations and tradeoffs, including tradeoffs with privacy and security, and policymakers should evaluate the potential implications of these approaches.
This report is based on an extensive survey of existing technical and policy literature, recently proposed and enacted legislation, and emerging regulatory guidance and rulemaking. The appendix provides further details about the major legislative and regulatory frameworks currently being proposed in the U.S. regarding synthetic content.
This report is part of a larger, ongoing FPF effort to monitor and analyze emerging trends in synthetic content, including its potential risks, technical developments, and relevant legislation and regulation. For previous FPF work on this issue, check out the following:
- Comment to the Federal Communications Commission (FCC) on disclosure and transparency of AI-generated content in political advertisements.
- Comment to the National Institute of Standards and Technology (NIST) in response to NIST AI 100-4, “Reducing Risks Posed by Synthetic Content: An Overview of Technical Approaches to Digital Content Transparency.”
- Comment to the Federal Trade Commission (FTC) on AI-driven impersonation.
- Comment to the Federal Election Commission (FEC) on “fraudulent misrepresentation” in AI-generated political campaign ads.
- One-pager analyzing California’s new AI Transparency Act (SB 942), which requires certain disclosures for AI-generated content (FPF members only).
- Briefing analyzing general current legislative approaches to synthetic content (FPF members only).
If you would like to speak with us about this work, or about synthetic content more generally, please reach out to Jameson Spivack ([email protected]).