
Navigating the Evolving Landscape of Deepfake Laws: A Guide for Online Platforms and Businesses

The rapid advancement of Artificial Intelligence (AI) technology has led to the proliferation of sophisticated synthetic media, commonly known as "deepfakes". These hyper-realistic manipulated images, audio, and videos pose significant threats, including the spread of misinformation, identity theft, fraud, and the creation of non-consensual explicit content. As AI continues to outpace existing regulations, lawmakers globally are increasingly focusing on deepfakes, leading to a complex and evolving legal landscape that online platforms and internet businesses must navigate.

Understanding these legal developments is crucial for avoiding costly penalties and reputational damage, and for contributing to a safer online environment. This blog post outlines the current state of deepfake laws and provides practical steps for platforms to prepare.

Why Deepfake Laws Matter for Your Platform

Deepfakes can be used maliciously on online platforms in numerous ways:
  • Spreading Disinformation: Manipulated videos of public figures can sow distrust and influence public opinion, especially during elections.
  • Identity Fraud and Scams: AI-generated voices or images can be used to bypass identity verification systems or facilitate scams.
  • Non-Consensual Explicit Content: A significant majority of deepfakes on the internet are non-consensual explicit images or videos, disproportionately targeting women and minors, causing immense trauma and harm.
  • Reputational Damage and Harassment: Deepfakes can be used to defame or harass individuals or entities.

Platforms that host such content may face legal liability, public backlash, and erosion of user trust. New laws increasingly impose obligations directly on platforms regarding the content they host and distribute.

The Evolving Global Legal Landscape

While a comprehensive international framework is still developing, various jurisdictions are implementing or proposing laws to address deepfakes.

United States: The U.S. legal landscape is a mix of state-level laws and emerging federal regulation.
  • State Laws: Many individual states have enacted laws, though they vary in scope and definition. State laws often target specific harmful uses, such as prohibiting non-consensual sexual deepfakes or limiting the use of deepfakes in political campaigns. Some states are amending existing laws (like revenge porn or child sexual abuse material statutes) to include AI-generated content. Examples include California, Texas, Virginia, New York, and Tennessee. Tennessee's ELVIS Act, for instance, specifically protects a person's voice, including simulations, in the context of commercial exploitation.
  • Federal Developments: There is a growing push for federal action.
    1. The TAKE IT DOWN Act, signed into law in May 2025, is a significant federal development. It criminalizes the knowing publication or threat to publish non-consensual intimate imagery, explicitly including AI-generated deepfakes depicting identifiable real people. The Act also requires covered online platforms to establish a process for victims to notify them and request removal of such content within 48 hours of receiving notice. The Federal Trade Commission (FTC) is tasked with enforcing compliance with the takedown requirements.
    2. Proposed legislation, such as the NO FAKES Act, aims to create a federal right of publicity for digital replicas, protecting individuals' voice and likeness against unauthorized use with statutory damages and takedown procedures. The DEFIANCE Act is another proposed bill that would allow victims to sue over faked pornographic images. The U.S. Copyright Office has also recommended new federal legislation specifically to govern deepfakes due to the insufficiency of existing laws.
European Union: The EU has taken a more comprehensive approach with the AI Act, the world's first broad AI law, which entered into force in August 2024, although most of its obligations apply from August 2026.
  • The AI Act regulates AI systems based on their level of risk.
  • It imposes transparency obligations on providers and deployers of certain AI systems, including those that generate content.
  • Content generated or modified with AI, including deepfakes, must be clearly labeled so users are aware. This may involve watermarks or other technical markers.
  • Non-compliance can result in significant financial penalties: up to 35 million euros or 7 percent of global annual turnover, whichever is higher.
  • Some EU countries are also implementing their own specific regulations, such as Spain, which recently approved a bill mandating labeling and imposing severe penalties for non-compliance.
Other Countries:
  • China has implemented strict regulations requiring clear labeling of synthetic content and prohibiting deepfakes that threaten national security or social stability.
  • India currently relies on existing laws like the Information Technology Act and the Indian Penal Code to address malicious deepfake use (defamation, harassment, national security) and is actively exploring dedicated regulations.
Challenges in Regulation and Detection

Regulating deepfakes is complex due to the rapid evolution of the technology, the difficulty in tracing origins (especially cross-border), and the need to balance legitimate uses and freedom of expression with preventing harm. Law enforcement and the industry currently struggle to reliably detect deepfakes. This has created a "technological arms race" between those creating deepfakes and those developing detection methods.

Steps for Platforms and Internet Businesses to Prepare

Given the accelerating pace of legislative and technological developments, online platforms and businesses should take proactive steps to prepare:
  1. Understand Jurisdictional Differences: Identify where your users and customers are located and research the specific deepfake and AI laws in those regions. Laws vary significantly by state in the U.S. and from country to country, including within the EU.
  2. Implement Transparency and Disclosure Mechanisms:
    • Clearly label AI-generated or manipulated content as required by laws like the EU AI Act and proposed U.S. legislation such as the DEEPFAKES Accountability Act (a minimal labeling sketch follows this list).
    • For content containing a moving visual element, ensure technical capabilities like embedded digital watermarks or content provenance technologies are in place to identify altered content. The 2023 DEEPFAKES Accountability Act cites "content provenance technologies" as an example, and the Attorney General is expected to issue technical specifications for these.
    • If using deepfakes or synthetic media in marketing or other content involving individuals, obtain explicit consent from those depicted. Proposed U.S. laws like the NO FAKES Act specifically address consent for digital replicas.
  3. Develop Robust Content Policies: Create and enforce clear internal policies regarding the acceptable and prohibited uses of deepfake technology on your platform. These policies should define what constitutes malicious use and align with legal requirements; expressing them as structured data, as sketched after this list, helps keep automated and human enforcement consistent.
  4. Integrate Detection Capabilities: Invest in and integrate technology to detect deepfakes and other advanced image manipulation methods within the content distributed on your platform. While detection remains challenging, it is crucial for identifying malicious content and complying with emerging requirements. Inference-based detection, which analyzes content directly, can complement provenance solutions like watermarking (see the triage sketch after this list). The FTC may require specialized software for enforcement of laws like the TAKE IT DOWN Act.
  5. Establish Clear Notice and Removal Procedures: Create an accessible process for individuals to notify your platform about non-consensual deepfakes or other prohibited content depicting them and to request its removal. The TAKE IT DOWN Act mandates such a process for covered platforms regarding non-consensual intimate imagery, requiring removal within 48 hours of valid notice (a deadline-tracking sketch follows this list).
  6. Provide User Education and Awareness: Educate your users about the risks of deepfakes, how to potentially identify them, and how to report problematic content on your platform. Encourage users to question and verify digital content.
  7. Stay Updated and Engage: Continuously monitor legislative changes and emerging technical standards globally. Engage with industry groups, legal experts, and potentially government agencies to stay informed and contribute to best practices.
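
For step 2, here is a minimal sketch in Python of one way to attach a machine-readable disclosure label to a generated image, using Pillow's PNG metadata support. The key names (ai_generated, ai_generator) are illustrative assumptions, not drawn from any statute or standard; real deployments would more likely adopt a provenance standard such as C2PA Content Credentials, since plain metadata is easily stripped when content is re-encoded.

```python
# Minimal sketch: embed an "AI-generated" disclosure label in PNG metadata.
# Key names are hypothetical; production systems should prefer a robust
# provenance standard (e.g. C2PA), because metadata is lost on re-encoding.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str, generator: str) -> None:
    """Copy a PNG, embedding a disclosure label in its metadata."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")      # hypothetical key
    meta.add_text("ai_generator", generator)   # e.g. the model/tool name
    img.save(dst_path, pnginfo=meta)           # dst_path must be a .png

def read_label(path: str) -> dict:
    """Return any disclosure metadata embedded in a PNG."""
    img = Image.open(path)
    return {k: v for k, v in img.text.items() if k.startswith("ai_")}
```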
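
For step 3, one practical pattern is to express the content policy as structured data so that automated moderation and human reviewers enforce exactly the same rules. The category names, actions, and internal deadlines below are illustrative assumptions; only the 48-hour window mirrors an actual legal deadline (the TAKE IT DOWN Act).

```python
# Minimal sketch: a deepfake content policy as structured data. Categories
# and actions are illustrative, not a legal standard.
POLICY = {
    "ncii_deepfake": {          # non-consensual intimate imagery
        "allowed": False,
        "action": "remove",
        "deadline_hours": 48,   # TAKE IT DOWN Act removal window
        "report_to": "legal",
    },
    "political_deepfake_unlabeled": {
        "allowed": False,
        "action": "label_or_remove",
        "deadline_hours": 24,   # hypothetical internal SLA
    },
    "parody_labeled": {
        "allowed": True,
        "action": "none",
    },
}

def required_action(category: str) -> str:
    """Look up the enforcement action for a policy category."""
    rule = POLICY.get(category)
    return rule["action"] if rule else "escalate_for_review"
```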
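
For step 4, the sketch below shows how a triage layer might combine a provenance check with an inference-based detector. The deepfake_score field stands in for whatever licensed or in-house detection model a platform uses, and the 0.8 threshold is an arbitrary placeholder, not a recommended value.

```python
# Minimal sketch: triage combining provenance signals with an
# inference-based classifier score. Model and threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class ScanResult:
    has_provenance_label: bool
    deepfake_score: float   # 0.0 (likely authentic) .. 1.0 (likely synthetic)

def triage(result: ScanResult, threshold: float = 0.8) -> str:
    if result.has_provenance_label:
        return "label_as_ai"             # disclosed synthetic media
    if result.deepfake_score >= threshold:
        return "queue_for_human_review"  # undisclosed, likely manipulated
    return "allow"
```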
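
For step 5, the sketch below models a takedown notice as a record that tracks the 48-hour removal window the TAKE IT DOWN Act sets for reported non-consensual intimate imagery. The field names and escalation logic are illustrative, not taken from the statute.

```python
# Minimal sketch: a takedown notice record tracking the 48-hour window.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # TAKE IT DOWN Act deadline

@dataclass
class TakedownNotice:
    content_id: str
    reporter_contact: str
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    @property
    def deadline(self) -> datetime:
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self) -> bool:
        return datetime.now(timezone.utc) > self.deadline

# Usage idea: alert on any overdue notice in the review queue, e.g.
# overdue = [n for n in queue if n.is_overdue()]
```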

By proactively implementing these steps, platforms can better navigate the complex legal landscape, protect their users, and address the growing challenges posed by deepfake technology.

    Author: Ami Kumar, Trust & Safety Thought Leader at Contrails.ai

    He specializes in gaming Trust & Safety at Contrails.ai, translating complex online protection challenges into strategic advantages for digital platforms. Drawing on extensive experience in online gaming safety, he develops comprehensive, AI-powered frameworks that ensure robust user protection while preserving positive player experiences.

    He champions proactive approaches, building scalable moderation strategies that seamlessly balance automation with human insight. His work spans developing adaptive governance models, fostering cross-functional safety programs, and measuring outcomes to demonstrate both user safety and business value. He actively contributes to industry best practices, believing in collaborative efforts for effective online protection. Connect with him to discuss the strategic value of Trust & Safety in building user trust and sustainable gaming communities.
Copyright © 2025 Social Media Matters. All Rights Reserved.