The rapid advancement of artificial intelligence (AI), particularly generative AI, has produced increasingly sophisticated "deepfakes": synthetic media that can convincingly depict individuals saying or doing things they never did. While some deepfakes are harmless, even humorous, their potential for misuse in misinformation, harassment, and identity theft has prompted a wave of legislative action. California, ever at the forefront of tech regulation, has been particularly proactive in addressing this complex issue.
California’s legal framework surrounding deepfakes is a patchwork of existing laws and newly enacted legislation, designed to tackle the various harms deepfakes can inflict. Understanding these evolving laws is crucial for individuals, content creators, and especially online platforms operating in the Golden State.
Key Areas of Regulation:
California’s deepfake laws broadly fall into a few critical categories:
Non-Consensual Sexually Explicit Deepfakes: This is arguably the most urgent area of deepfake regulation.
Existing Law (AB 602, 2019): California was an early mover, creating a civil cause of action against individuals who create or distribute non-consensual sexually explicit deepfakes. This allows victims to sue for damages, including emotional distress, and seek injunctions to have the content removed.
New Laws (Effective January 1, 2025):
SB 926: Criminalizes the creation and distribution of non-consensual deepfake pornography. It specifically prohibits distributing realistic deepfake intimate images without consent if the distributor knew or should have known it would cause serious emotional distress. Violations can lead to civil penalties, fines, or criminal charges.
AB 1831: Expands the scope of existing child pornography laws to explicitly include content that is digitally altered or generated by AI systems, closing potential loopholes.
SB 981: Mandates that social media platforms in California establish reporting tools for users to report cases of sexually explicit digital identity theft. Platforms must temporarily hide reported content, confirm receipt within 48 hours, and assess the report within 30 days (extendable to 60). While it doesn’t specify penalties for non-compliance, failure to meet these requirements could lead to legal challenges.
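The timeline SB 981 imposes can be sketched in code. This is purely an illustrative model of the deadlines described above, not anything the statute prescribes; all class and field names here are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch of SB 981's reporting timeline. The windows below come
# from the bill's text; the data model itself is an illustrative assumption.
CONFIRM_WINDOW = timedelta(hours=48)   # confirm receipt of a report
ASSESS_WINDOW = timedelta(days=30)     # initial assessment deadline
ASSESS_EXTENSION = timedelta(days=30)  # extendable to 60 days total

@dataclass
class DeepfakeReport:
    reported_at: datetime
    extended: bool = False
    content_hidden: bool = True  # reported content must be temporarily hidden

    @property
    def confirm_deadline(self) -> datetime:
        """Latest time by which the platform must confirm receipt."""
        return self.reported_at + CONFIRM_WINDOW

    @property
    def assess_deadline(self) -> datetime:
        """Latest time by which the platform must assess the report."""
        extra = ASSESS_EXTENSION if self.extended else timedelta()
        return self.reported_at + ASSESS_WINDOW + extra
```

For example, a report filed on January 1 would require confirmation by January 3 and assessment within the month, or by March 2 if the extension applies.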
Election-Related Deepfakes: The potential for deepfakes to sow disinformation and influence elections is a major concern.
Existing Law (AB 730, 2019): Prohibited the distribution of materially deceptive audio or visual media of a political candidate within 60 days of an election, with the intent to injure the candidate’s reputation or deceive voters. It included exceptions for satire and parody, provided there was a clear disclosure.
New Laws (Effective January 1, 2025, and some immediately):
AB 2355: Requires political advertisements using AI-generated or substantially altered content to include a clear and conspicuous disclosure that the material has been altered using AI.
AB 2655 (“Defending Democracy from Deepfake Deception Act of 2024”): Requires large online platforms (over one million California users) to block or label “materially deceptive” election-related content, particularly deepfakes that could harm a candidate’s reputation or election chances. It mandates rapid removal of flagged content (within 72 hours) and labeling tools. It also empowers candidates, officials, and the Attorney General to seek injunctive relief and damages. Exemptions apply for satire, parody, and news.
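AB 2655's two numeric triggers, the one-million-user coverage threshold and the 72-hour removal window, can be illustrated with a brief sketch. The thresholds come from the bill as described above; the function names and structure are assumptions for illustration only.

```python
from datetime import datetime, timedelta

# Hypothetical illustration of AB 2655's coverage test and removal window.
# Only the numeric thresholds are drawn from the bill; everything else
# (names, signatures) is an illustrative assumption.
LARGE_PLATFORM_THRESHOLD = 1_000_000   # more than one million California users
REMOVAL_WINDOW = timedelta(hours=72)   # flagged content must be removed within 72 hours

def is_covered_platform(california_users: int) -> bool:
    """Whether a platform meets the act's 'large online platform' threshold."""
    return california_users > LARGE_PLATFORM_THRESHOLD

def removal_deadline(flagged_at: datetime) -> datetime:
    """Deadline for removing flagged materially deceptive election content."""
    return flagged_at + REMOVAL_WINDOW
```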
AB 2839: Expands the prohibition on distributing deceptive content about a candidate to a longer period (120 days before and, in some cases, 60 days after an election) and broadens the types of “materially deceptive content” covered.
Protection of Likeness and Voice: Beyond elections and explicit content, California is also addressing the unauthorized use of an individual’s digital replica.
AB 2602 (Effective January 1, 2025): Protects individuals from unauthorized use of their digital replicas in personal or professional service contracts. It renders a contract provision unenforceable if it allows a digital replica to be used in place of work the individual would otherwise have performed, the provision lacks a reasonably specific description of the intended uses, and the individual was not represented by legal counsel or a labor union.
AB 1836 (Effective January 1, 2025): Restricts the use of digital replicas of deceased personalities for commercial purposes without consent from their estate, providing protections retroactively. Violators can be liable for significant damages.
Legal Issues and Challenges:
While these laws aim to provide much-needed protection, they also grapple with significant legal challenges, primarily concerning the First Amendment’s guarantee of free speech.
Vagueness and Overbreadth: Critics argue that some definitions, like “materially deceptive content,” could be overly broad and chill legitimate forms of speech, including satire, parody, or political commentary.
Compelled Speech: Requirements for disclosures or labels can be seen as “compelled speech,” where the government forces individuals or platforms to convey a specific message. Courts often scrutinize such requirements to ensure they are narrowly tailored and serve a compelling government interest. Indeed, a federal judge has already blocked a key part of AB 2839 regarding font size requirements for disclaimers, citing First Amendment concerns.
Platform Liability: The laws place new obligations on social media platforms, raising questions about their role as intermediaries and their ability to accurately identify and remove deepfakes at scale while avoiding censorship.
Takeaways:
California is actively shaping the legal landscape for deepfakes, attempting to strike a balance between safeguarding individuals and the democratic process, and upholding fundamental free speech rights. As AI technology continues to evolve, so too will the legal responses. Individuals and businesses in California must stay abreast of these developments to avoid potential civil and criminal liabilities and to ensure responsible and ethical engagement with synthetic media. The debate around deepfakes highlights the urgent need for clear, effective, and constitutionally sound regulation in the age of AI.