The sheer frequency of deepfake attacks is a cause for serious concern. In 2024, a deepfake attack occurred every five minutes. This relentless pace underscores the scale at which this threat is proliferating. As generative AI (GenAI) tools become more readily available and sophisticated, the barrier to entry for creating convincing deepfakes is plummeting. This accessibility is further amplified by fraud-as-a-service (FaaS) platforms, where criminals share knowledge, “best practices”, and even the AI tools needed to create these deceptive forgeries.
The sophistication of deepfakes is rapidly outpacing our ability to detect them. Basic fraud tactics like phishing are giving way to hyper-realistic AI-generated videos and synthetic identities. Face-swap apps and GenAI tools allow fraudsters to perform and scale increasingly believable biometric fraud attacks, making it harder even for trained experts to distinguish real from fake.
The applications of deepfakes in fraudulent activities are diverse and terrifying:

- Identity Fraud: Deepfakes are being used to bypass identity verification during onboarding, enabling fraudulent account openings. They are also employed to defeat biometric checks in account takeovers, granting unauthorized access to existing accounts. FinCEN, the U.S. Financial Crimes Enforcement Network, has observed an increase in the suspected use of deepfake media, particularly fraudulent identity documents, to circumvent verification methods.
- Financial Scams: We are seeing a surge in business email compromise (BEC) attacks leveraging deepfakes, where AI-generated video and audio are used to impersonate company executives, leading to massive financial losses. Romance scams are becoming more insidious with the use of AI chatbots that can converse fluently and convincingly with victims. Pig butchering schemes, long-con investment frauds that groom victims before draining their funds, are also evolving, with criminal syndicates using deepfakes for video calls, voice clones, and chatbots to scale up their operations dramatically.
- Extortion: Deepfake extortion scams are targeting high-profile executives, threatening to release compromising AI-generated videos unless ransoms are paid.
- Misinformation and Political Influence: Deepfakes are not just about financial gain; they can also be used to spread misinformation and even attempt to influence election outcomes by impersonating public or political figures.
- "Digital Arrest" Scams: A disturbing trend emerging in India involves scammers posing as law enforcement officials, using deepfake video and audio of government officials to manipulate victims into transferring their life savings. Experts warn this could soon reach Western nations.
What can be done? The sources suggest a multi-pronged approach is necessary:

- Enhanced Verification: Strong digital identity verification processes are vital, especially during onboarding. Organisations need to adopt multi-layered security strategies.
- AI-Powered Detection: Fighting AI with AI is becoming increasingly effective. AI-driven anti-spoofing models are being developed specifically to catch deepfakes; a minimal sketch of this kind of frame-scoring check follows this list. Leveraging AI and machine learning for threat detection and response is crucial.
- Zero Trust Strategies: Adopting a Zero Trust approach with its "Never Trust, Always Verify" principle, along with identity-centric solutions like biometric verification and multi-factor authentication (MFA), is essential; a second sketch after this list illustrates such a policy gate.
- Public Awareness and Education: Educating consumers about the risks of deepfakes and how to identify potential scams is critical. Frequent communication and warnings are necessary.
- Vigilance and Skepticism: Individuals must be wary of unexpected requests, especially those creating urgency. Verifying requests through independent channels is vital. Asking personal questions or establishing "safe words" with family members can help verify identities. In video calls, inconsistencies like blurred backgrounds or out-of-sync speech should be treated as red flags.
- Regulation: The regulatory landscape surrounding AI, including deepfakes, is complex and evolving. While some regions are exploring transparency obligations, finding the right balance between protection and innovation remains a challenge.
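
To make the "fighting AI with AI" idea concrete, here is a minimal sketch of how an anti-spoofing model might score incoming video during onboarding. Everything here is an assumption for illustration: the TorchScript file anti_spoof.pt, the single "synthetic" logit the model returns, and the 0.7 decision threshold are hypothetical, not a specific vendor's product or API.

```python
# A minimal sketch of AI-powered deepfake detection: score a handful of
# video frames with a pretrained anti-spoofing classifier and flag the
# video if the average "synthetic" probability is high. The model file,
# its input format, and the threshold are illustrative assumptions.
import cv2
import torch

FRAME_SIZE = (224, 224)   # input resolution the assumed model expects
THRESHOLD = 0.7           # flag the video if the mean score exceeds this

def score_video(path: str, model: torch.jit.ScriptModule, max_frames: int = 32) -> float:
    """Return the mean per-frame probability that the video is synthetic."""
    cap = cv2.VideoCapture(path)
    scores = []
    while len(scores) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        # BGR uint8 frame -> resized RGB float tensor in [0, 1]
        # (the assumed model takes unnormalized [0, 1] RGB input)
        frame = cv2.cvtColor(cv2.resize(frame, FRAME_SIZE), cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(frame).permute(2, 0, 1).float().div(255.0)
        with torch.no_grad():
            logit = model(tensor.unsqueeze(0))   # assumed output: (1, 1) logit
        scores.append(torch.sigmoid(logit).item())
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    model = torch.jit.load("anti_spoof.pt")  # hypothetical TorchScript artifact
    model.eval()
    score = score_video("onboarding_selfie.mp4", model)
    verdict = "LIKELY SYNTHETIC" if score > THRESHOLD else "likely genuine"
    print(f"mean deepfake score: {score:.2f} -> {verdict}")
```

In practice a score like this would be one signal among many (document checks, device fingerprinting, behavioral analytics) rather than a standalone verdict.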
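And here is a minimal sketch of the Zero Trust principle applied to a high-value transfer: the gate re-verifies identity signals on every sensitive action instead of trusting the session. The signal names and thresholds are illustrative assumptions, not a specific framework's API.

```python
# A minimal "Never Trust, Always Verify" policy gate: each sensitive
# action re-checks identity signals rather than relying on a logged-in
# session. Signal names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RequestContext:
    mfa_age_seconds: int      # time since the user last passed MFA
    liveness_passed: bool     # result of the latest biometric liveness check
    device_known: bool        # device previously enrolled by this user
    amount: float             # value of the transfer being attempted

def allow_transfer(ctx: RequestContext) -> bool:
    """Grant high-value transfers only when every signal checks out."""
    if ctx.amount > 10_000 and not ctx.liveness_passed:
        return False          # high value: require a fresh liveness check
    if ctx.mfa_age_seconds > 300:
        return False          # MFA older than 5 minutes: force re-verification
    if not ctx.device_known:
        return False          # unknown device: never trusted by default
    return True

print(allow_transfer(RequestContext(120, True, True, 25_000.0)))   # True
print(allow_transfer(RequestContext(120, False, True, 25_000.0)))  # False
```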
Author: Ami Kumar, Trust & Safety Thought Leader at Contrails.ai
Ami Kumar brings over a decade of specialized expertise to the intersection of child safety and AI education, making them uniquely qualified to address the critical components of AI literacy outlined in "Building Digital Resilience." As a Trust & Safety thought leader at Contrails.ai, Ami specializes in developing educational frameworks that translate complex AI concepts into age-appropriate learning experiences for children and families.
Drawing from extensive experience in digital parenting and online gaming safety, Ami has pioneered comprehensive AI literacy programs that balance protection with empowerment, an approach reflected in the blog's emphasis on building critical thinking skills alongside technical understanding. Their work with schools, educational platforms, and safety-focused organizations has directly informed the practical, field-tested strategies presented in the article.
Ami's advocacy for proactive approaches to online safety aligns perfectly with the blog's focus on preparing children for an AI-integrated future rather than simply reacting to emerging risks. Their expertise includes:
- Developing adaptive educational frameworks that evolve with rapidly changing AI technologies
- Creating age-appropriate learning experiences that balance engagement with critical awareness
- Building cross-functional programs that connect educators, parents, and technology developers
- Measuring educational outcomes to demonstrate both safety improvements and digital confidence
Connect with Ami to discuss implementing effective AI literacy programs that prepare young people to navigate artificial intelligence with confidence, creativity, and critical awareness.