Sakshi Agarwal*
ABSTRACT
Artificial intelligence has given rise to a category of synthetic media commonly known as “deepfakes” that can replicate a person’s face, voice, and mannerisms with photorealistic accuracy. What began as a novelty in academic computer science has become a formidable instrument of fraud, harassment, electoral manipulation, and the sexual exploitation of children. This article traces the arc of deepfake criminality from its earliest documented harms through the present legislative and prosecutorial responses. Drawing on federal statutes, landmark state laws, recent criminal prosecutions, and international regulatory models, it argues that while lawmakers have moved with unusual speed, critical gaps in mens rea standards, platform accountability, and cross-border jurisdiction remain. The article further contends that the first conviction under the TAKE IT DOWN Act in April 2026 marks not an endpoint but an opening salvo in a long legal reckoning with synthetic reality.
INTRODUCTION
In January 2024, a finance worker at the Hong Kong branch of a global engineering firm sat down for what he believed was a routine video conference with his company’s UK-based Chief Financial Officer and several senior colleagues. He saw their faces. He heard their voices. He watched them move. Everything appeared exactly as he had come to expect. By the end of the call, he had authorised fifteen separate wire transfers totalling approximately $25.6 million, depositing the funds into five bank accounts in Hong Kong. Every single participant on that call was a fabrication, an AI-generated deepfake constructed from publicly available photographs and audio recordings of real company executives.
The Arup incident, as it has come to be known, was not an aberration. It was a proof-of-concept, and the criminal world took note. By the first quarter of 2025, documented financial losses from deepfake-enabled fraud had exceeded $200 million in the United States alone, and the FBI’s Internet Crime Report for 2025 attributed $893 million in losses to AI-related scams, up from significantly lower figures in prior years.[1] These numbers almost certainly represent a fraction of actual harm, as many victims, particularly corporations, decline to report incidents for fear of reputational damage.
The legal system, historically slow to metabolise technological change, has responded to the deepfake crisis with unexpected urgency. Between 2022 and the spring of 2026, some 170 laws targeting synthetic media were enacted across American states. Congress passed the TAKE IT DOWN Act in near-unanimous fashion in April 2025, and the President signed it into law on May 19, 2025, marking the first major federal statute explicitly targeting AI-generated intimate imagery. The first conviction under that statute was handed down in April 2026. Internationally, the European Union’s AI Act, the United Kingdom’s amendments to the Online Safety Act, and Japan’s and China’s domestic regulations have added further layers to an evolving global patchwork.
Yet speed has not equalled comprehensiveness. Criminal law, built on the presumption of an identifiable human actor, a discrete act, and a discernible victim, was not designed for a world in which a machine can replicate any person’s appearance and voice, instantly, at scale, and at negligible cost. This article examines where the law stands, where it falls short, and what is at stake if legislators and courts fail to close the remaining gaps.
What Deepfakes Are: Technical Foundations and Criminal Utility
A. The Technology
The term “deepfake” derives from the merging of “deep learning”, a subfield of machine learning, and “fake.” The technology relies on a class of neural networks called Generative Adversarial Networks (GANs), in which two competing algorithms, a generator and a discriminator, iteratively refine synthetic outputs until they become indistinguishable from authentic material. More recent architectures, including diffusion models, have further improved fidelity and reduced the computational cost of generating realistic synthetic faces, voices, and video.
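The adversarial dynamic can be made concrete with a deliberately minimal sketch, offered here as an illustration rather than a working deepfake system: a one-parameter “generator” learns to shift random noise until a logistic-regression “discriminator” can no longer distinguish it from samples of a target distribution. All names and parameters below are invented for the example; real face- and voice-synthesis models use deep networks with millions of parameters, but the competitive training loop is the same in kind.

```python
import numpy as np

def train_toy_gan(steps=3000, batch=128, lr_d=0.1, lr_g=0.05, seed=0):
    """Toy illustration of the adversarial loop: a one-parameter 'generator'
    (a learned shift applied to Gaussian noise) tries to mimic 'authentic'
    data drawn from N(4, 1), while a logistic-regression 'discriminator'
    learns to tell real samples from synthetic ones."""
    rng = np.random.default_rng(seed)
    b_g = 0.0                 # generator parameter: shift applied to noise
    w_d, b_d = 0.1, 0.0       # discriminator parameters (logistic regression)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    history = []
    for _ in range(steps):
        real = rng.normal(4.0, 1.0, batch)         # "authentic" samples
        fake = rng.normal(0.0, 1.0, batch) + b_g   # synthetic samples

        # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
        d_real = sigmoid(w_d * real + b_d)
        d_fake = sigmoid(w_d * fake + b_d)
        w_d -= lr_d * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
        b_d -= lr_d * (np.mean(d_real - 1.0) + np.mean(d_fake))

        # Generator step: shift the fakes toward where the discriminator
        # rates them as real (the "non-saturating" loss, -log D(fake)).
        d_fake = sigmoid(w_d * fake + b_d)
        b_g -= lr_g * np.mean((d_fake - 1.0) * w_d)
        history.append(b_g)

    # Average over the tail of training to smooth out oscillation.
    return float(np.mean(history[-500:]))
```

Run to convergence, the learned shift settles near the real data’s mean of 4.0: neither network “wins,” but their competition leaves the generator producing output statistically resembling the authentic samples, which is precisely why the forgeries the technology produces are so difficult to detect.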
The practical consequence is that today’s voice-cloning software requires as little as twenty seconds of audio to produce a realistic imitation of a person’s speech. Face-swapping technology capable of generating a convincing deepfake video can be executed in under an hour using freely available programs downloadable from the internet.[2] What once required Hollywood-grade resources and weeks of production time can now be accomplished by a motivated amateur with a consumer laptop.
B. The Spectrum of Criminal Applications
Criminal exploitation of deepfake technology falls into several broad, yet often overlapping, categories, each implicating distinct bodies of criminal law.
First, financial fraud. The Arup case exemplifies the most financially devastating application: the use of deepfake videos and cloned voices to impersonate executives and authorise fraudulent wire transfers. The pattern, often called “Business Email Compromise 2.0” or “deepfake CEO fraud,” has spread rapidly. In March 2025, a finance director at a multinational firm in Singapore authorised a $499,000 transfer after what appeared to be a legitimate Zoom call with fabricated senior leadership. As early as 2019, a UK energy firm lost €220,000 after an employee received a phone call from a voice clone that sounded exactly like the company’s CEO.
Second, non-consensual intimate imagery (NCII). The weaponisation of deepfake technology to create fabricated sexually explicit content depicting real individuals, most commonly women, has emerged as one of the most prolific and psychologically devastating applications of the technology. A 2024 report by the Internet Watch Foundation identified over 3,500 new AI-generated criminal child sexual abuse images uploaded to a single dark web forum within a study period.[3] The National Center for Missing and Exploited Children reported receiving more than 1.5 million tips related to generative AI and child sexual exploitation in 2025 alone.
Third, electoral manipulation. The threat to democratic institutions materialised dramatically in January 2024, when New Hampshire voters received robocalls featuring an AI-generated clone of President Biden’s voice urging them not to vote in the Democratic primary. The calls reached tens of thousands of voters the day before the election. The incident represented, in the words of New Hampshire Attorney General John Formella, “a real-life example of an attempt to use AI to interfere with an election.”
Fourth, scams targeting private individuals, particularly the elderly. Voice cloning has enabled a new generation of so-called “grandparent scams,” in which criminals clone the voice of a victim’s grandchild or child and deploy it in fabricated emergency scenarios demanding immediate cash transfers. In July 2025, Sharon Brightwell of Dover, Florida, received a call from what sounded unmistakably like her daughter crying, claiming to have been in a car accident, begging for $15,000 to avoid criminal charges. Brightwell wired the money before discovering the voice was an AI construct. “I know my daughter’s cry,” she later told reporters. “There is nobody who could convince me that it wasn’t her.”
The Pre-Existing Legal Landscape: Adapting Old Tools to New Crimes
A. Wire Fraud and the Computer Fraud and Abuse Act
Before the enactment of deepfake-specific statutes, prosecutors confronting AI-enabled fraud were forced to rely on general-purpose instruments. The federal wire fraud statute, 18 U.S.C. § 1343, prohibits the use of wire communications in furtherance of any scheme or artifice to defraud, and its broad drafting has made it a natural first resort for deepfake-based financial crimes.[4] Similarly, 18 U.S.C. § 1030, the Computer Fraud and Abuse Act, has been invoked in cases involving the deployment of deepfake content through unauthorised computer access or system intrusion. Both statutes, however, share a significant limitation: they require proof of the core elements of fraud (a scheme, a misrepresentation, materiality, and specific intent to defraud), which can be difficult to establish when AI systems generate the deception autonomously or when the perpetrator acts through intermediary tools and anonymous accounts.
B. Identity Theft and Impersonation Statutes
The federal aggravated identity theft statute, 18 U.S.C. § 1028A, imposes a mandatory two-year consecutive sentence for identity theft committed in connection with a predicate felony. State impersonation statutes have similarly been stretched to accommodate deepfake scenarios, though their drafting varies significantly, affecting their applicability. The charges filed against Steve Kramer in New Hampshire (thirteen counts of felony voter suppression and thirteen counts of misdemeanour impersonation of a candidate) were among the first prosecutions in which existing identity-related statutes were deployed against the perpetrator of a deepfake political interference scheme.[5]
C. Harassment, Stalking, and Cyberstalking
Federal cyberstalking law, 18 U.S.C. § 2261A, prohibits the use of electronic communications to engage in a course of conduct that causes substantial emotional distress, and state harassment statutes provide parallel avenues for prosecution in cases where deepfake intimate imagery is deployed to humiliate, coerce, or silence victims. However, the traditional structure of harassment law, which focuses on a course of repeated conduct toward a specific person, can struggle to accommodate scenarios in which a single deepfake image is created and widely disseminated, or in which the creator and distributor are different individuals, potentially in different jurisdictions.
The Legislative Revolution: 2023–2026
A. The TAKE IT DOWN Act (Federal, 2025)
The TAKE IT DOWN Act, formally titled the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, was signed into law on May 19, 2025, following near-unanimous passage in both chambers of Congress (409–2 in the House; unanimous in the Senate).[6] The statute represents the most significant federal legislative response to AI-generated synthetic media to date.
The Act operates on two distinct tracks. First, it establishes criminal liability: any person who knowingly publishes or threatens to publish non-consensual intimate imagery, whether authentic or AI-generated, through an interactive computer service commits a federal crime. Penalties range up to two years’ imprisonment for adult victims and up to three years’ imprisonment for minor victims. The statute’s definition of a “digital forgery” captures imagery that “appears indistinguishable from genuine” to a reasonable observer, a standard that acknowledges both the current state of deepfake technology and its likely future trajectory.
Second, and perhaps equally consequentially, the Act imposes affirmative obligations on platforms. Covered services must implement a notice-and-takedown mechanism by May 19, 2026. Upon receiving a valid request from a victim, a platform must remove the content within forty-eight hours and make reasonable efforts to eliminate known copies. The Federal Trade Commission is designated as the primary civil enforcement authority, and failure to comply with takedown obligations may constitute an unfair or deceptive act or practice under the FTC Act.
The Act attracted criticism from civil liberties organisations, including the Electronic Frontier Foundation, the Center for Democracy and Technology, and the Freedom of the Press Foundation, who raised concerns about the potential for overbroad takedown requests to chill lawful speech, and about whether the forty-eight-hour removal window would, in practice, become an instrument of censorship rather than victim protection.[7] These concerns have acquired practical significance. Critics argue that any system permitting individuals to demand the removal of content within forty-eight hours creates structural incentives for abuse, particularly in contexts such as political satire, where the line between protected expression and harmful deepfakes is genuinely contested.
B. State Legislative Innovation
The pace of state-level legislative activity has been extraordinary. Since 2022, 170 laws specifically targeting synthetic media have been enacted across American jurisdictions, and in 2025 alone, 146 deepfake-related bills were introduced in state legislatures.
Pennsylvania’s 2025 Act 35, signed by Governor Josh Shapiro on July 7, 2025, established criminal penalties, including third-degree felony charges, for creating or distributing deepfakes with fraudulent or injurious intent. The Act specifically targeted schemes involving AI-generated voice clones used to deceive elderly victims, a growing vector for financial exploitation.[8] Notably, Pennsylvania’s statute provides a defence for content bearing a disclaimer identifying it as fabricated, and carves out protection for satire, commentary, and content in the public interest.
Washington State’s House Bill 1205, effective July 27, 2025, criminalised the intentional use of a “forged digital likeness” encompassing synthetic audio, video, or images when deployed with the intent to defraud, threaten, intimidate, or harass another.
Tennessee’s ELVIS Act, the Ensuring Likeness Voice and Image Security Act, enacted in 2024, extended civil and potentially criminal remedies to unauthorised AI replication of a person’s voice, reflecting intense lobbying by the music industry and broader entertainment sector. New York’s 2025 amendments added new civil remedies and registration requirements to protect individuals against unauthorised AI replication of their likenesses.
In California, Governor Gavin Newsom vetoed SB 11, the Artificial Intelligence Abuse Act, in October 2025, declining to impose consumer warnings on AI systems capable of creating deepfakes and expressing concern about the bill’s potential chilling effects on legitimate technological development. California’s AB 2839, enacted in September 2024, had already faced a successful legal challenge when a federal court struck down key provisions in August 2025, finding they conflicted with Section 230 of the Communications Decency Act and raised serious First Amendment concerns. Minnesota’s analogous statute has been challenged by X (formerly Twitter), with early rulings suggesting judicial scepticism of sweeping political deepfake prohibitions.
C. The International Dimension
The European Union’s Artificial Intelligence Act, which entered into force in 2024 with key provisions taking effect in 2025, explicitly prohibits the most dangerous forms of AI-based identity manipulation and mandates transparency labelling for AI-generated content[9]. The EU framework complements the General Data Protection Regulation’s existing protections, creating a layered regulatory environment that obliges platforms operating in Europe to audit their deepfake-generation and distribution capabilities.
France amended its Penal Code in 2024 to criminalise non-consensual sexual deepfakes through the new Article 226-8-1, with penalties of up to two years’ imprisonment and fines of €60,000. Proposed 2025 legislation would further require platform labelling of AI-altered images, with fines of up to €50,000 per offence for non-compliant platforms, though, as of early 2026, the legislation had not yet been adopted.
The United Kingdom’s Online Safety Act 2023 had already made the sharing of intimate deepfake images without consent illegal; proposed 2025 amendments would extend liability to the act of creation itself, imposing up to two years’ imprisonment for intentionally crafting sexually explicit deepfake images without consent.
China has adopted a markedly different approach, mandating that all AI-generated content be clearly labelled both visibly and in metadata, restricting the generation of AI news content, and prohibiting unlicensed providers from publishing AI-generated material. Japan has criminalised non-consensual intimate imagery, whether real or AI-generated, extending criminal protection to personality rights under its private sexual content laws.
Landmark Cases
A. United States v. Strahler
In April 2026, James Strahler II, a thirty-seven-year-old man from Ohio, became the first person convicted under the TAKE IT DOWN Act in what the Department of Justice announced as a landmark enforcement action.[10] Strahler had used AI technology to create non-consensual intimate images and videos of both adult and minor victims drawn from his local community.
According to the DOJ’s account, Strahler used images of boys he knew and morphed their faces onto the bodies of adults or other children to create material depicting them engaged in sexual acts with family members. He also sent messages to at least six adult female victims containing both real and AI-generated nude images of them, and created at least one AI-generated video depicting an adult victim engaged in sex acts with her father, which he then circulated to the victim’s co-workers.
Strahler pleaded guilty to three offences: cyberstalking under 18 U.S.C. § 2261A; producing obscene visual depictions of child sexual abuse; and publishing digital forgeries under the TAKE IT DOWN Act. He was arrested in June 2025, shortly after the statute’s enactment. First Lady Melania Trump, who had been a public advocate for the legislation, issued a statement expressing pride in the conviction.
The Strahler case is significant for several reasons beyond its historic status as the first TAKE IT DOWN Act conviction. It demonstrates the willingness of federal prosecutors to charge deepfake offences aggressively under multiple overlapping statutes, maximising both exposure and deterrent effect. It also illustrates the spectrum of deepfake harm from child sexual exploitation to workplace harassment of adult victims and the capacity of a single perpetrator operating with consumer AI tools to harm a community of victims simultaneously.
B. The Biden Robocall: State v. Kramer and FCC v. Lingo Telecom
The prosecution arising from the January 2024 New Hampshire primary robocall represented the first major criminal proceeding in the United States targeting AI-generated electoral interference. Steve Kramer, a Democratic political consultant, directed the creation and distribution of robocalls using a clone of President Biden’s voice, urging Democratic voters not to participate in the state’s primary election.[11]
New Hampshire prosecutors filed twenty-six criminal charges against Kramer across four county indictments: thirteen counts of felony voter suppression and thirteen counts of misdemeanour impersonation of a candidate. The Federal Communications Commission separately issued a $6 million proposed fine against Kramer for violations of caller-ID authentication laws.[12] The carrier that transmitted the calls, Lingo Telecom, reached a $1 million settlement with the FCC, the first such settlement of its kind, and agreed to implement enhanced Know Your Customer and STIR/SHAKEN caller-ID authentication protocols.
The Kramer case exposed a significant doctrinal gap. The charges filed were based on New Hampshire state statutes drafted long before the AI era, requiring prosecutorial creativity to apply provisions governing candidate impersonation to a scenario in which the impersonation was carried out by machine learning rather than a human actor wearing a disguise. The case also raised unresolved questions about the legal status of a political operative who creates a deepfake ostensibly as a “stunt” to draw attention to the dangers of deepfakes, a defence Kramer himself publicly articulated, claiming he acted to expose the vulnerability of democratic processes.
C. The Arup Deepfake Heist
The $25.6 million Arup deepfake heist in Hong Kong has become the defining corporate case study in AI-enabled fraud, yet it has also exposed the profound limitations of existing law enforcement responses to deepfake crime when it crosses international borders. Hong Kong police confirmed the arrest of six individuals in connection with the scheme, but as of early 2026, no prosecutorial outcome has been publicly confirmed, and the masterminds behind the deepfake generation remain unidentified.
The case raises acute questions about corporate criminal liability that no jurisdiction has yet resolved. When an employee is deceived by a deepfake into authorising a fraudulent transfer, does the company bear any legal responsibility for failing to implement reasonable anti-deepfake verification protocols? Several civil suits in the United States and the United Kingdom have begun to probe this question, and financial regulators in both jurisdictions have signalled that guidance on AI fraud prevention obligations for regulated entities may be forthcoming.
Critical Gaps and Unsolved Problems
A. The Mens Rea Problem
Criminal law’s foundational requirement of a guilty mind, mens rea, poses persistent difficulties in the context of deepfakes. Most deepfake statutes require proof that the defendant acted “knowingly” or “intentionally,” but the diffusion of deepfake creation capabilities across consumer AI platforms has created scenarios in which individuals may generate or distribute synthetic media without full appreciation of its fabricated nature, particularly as AI-generated content becomes visually indistinguishable from authentic material.
Pennsylvania’s Act 35 partially addressed this by adopting a “reasonably should have known” standard for cases in which a party distributes material that is a forged digital likeness, a negligence-adjacent formulation that represents a significant departure from traditional criminal intent requirements. Whether this innovation survives constitutional scrutiny and whether other jurisdictions will adopt comparable standards remains to be seen.
B. Platform Liability and Section 230
Section 230 of the Communications Decency Act, 47 U.S.C. § 230, provides broad immunity to interactive computer services for content created by third parties. While the TAKE IT DOWN Act creates a specific carve-out from Section 230 immunity for platforms that fail to implement compliant notice-and-takedown systems, the constitutional and doctrinal boundaries of this carve-out remain largely untested in federal courts. California’s experience with AB 2839, portions of which were struck down in August 2025 precisely because they conflicted with Section 230, illustrates the ongoing difficulty of imposing platform liability for deepfake content without running afoul of a statute never designed for AI-generated content.
C. The Jurisdictional Nightmare
Deepfake crimes are inherently transnational. The victim may be in Florida, the deepfake may have been generated by AI tools hosted on servers in Ireland, the perpetrator may be operating from Romania, and the distribution platform may be incorporated in the Cayman Islands. Existing mutual legal assistance treaty frameworks were designed for a world of physical evidence and slower-moving investigations, and they have proven inadequate to the pace and scale of AI-enabled crime.
The FBI’s 2025 Internet Crime Report noted that Americans over sixty reported approximately $7.7 billion in losses to fraud in 2025, a 37% increase from 2024, with AI-related scams among the costliest categories. Yet the cross-border nature of most deepfake fraud operations means that prosecution rates remain extremely low relative to the volume of reported offences. The absence of a coordinated international treaty framework for AI-enabled crime is perhaps the most significant structural gap in the current legal landscape.
D. The First Amendment and Political Deepfakes
The tension between deepfake regulation and the First Amendment’s protection of political speech is acute and far from resolved. As of January 2026, twenty-eight states have enacted laws addressing deepfakes in political communications, but most impose disclosure requirements rather than outright prohibitions. Courts have been sceptical of sweeping bans: the federal challenge to California’s AB 2839, Minnesota’s ongoing litigation with X (formerly Twitter), and academic commentary have all converged on the view that the First Amendment poses a serious obstacle to content-based prohibitions on political deepfakes, even those that are deeply deceptive.
The dilemma is real. Political satire, including fabricated videos of political figures doing or saying absurd things, has a long and protected history in American democracy. A statute broad enough to prohibit genuinely dangerous electoral deepfakes will almost inevitably sweep within its scope content that the First Amendment was designed to protect. The legislative challenge is to craft standards precise enough to avoid constitutional infirmity while robust enough to deter the kind of targeted electoral interference represented by the New Hampshire robocall.
Toward a Coherent Framework
The current legal response to deepfake crime is characterised by energy, creativity, and genuine legislative commitment but also by fragmentation, doctrinal inconsistency, and a failure to engage systematically with the most difficult questions. What would a coherent framework look like?
First, effective deepfake criminal law requires a comprehensive federal statute that addresses not merely intimate imagery but the full spectrum of deepfake-enabled harm: fraud, electoral interference, and identity-based coercion. The TAKE IT DOWN Act, significant as it is, addresses only one category of harm. Pending proposals, including the NO FAKES Act, the AI Fraud Accountability Act, and the DEFIANCE Act, would together address broader categories of harm, but their passage remains uncertain.
Second, effective law requires workable mens rea standards that distinguish between the sophisticated perpetrator who constructs a deepfake attack with deliberate criminal intent and the platform that hosts user-generated synthetic content without actual knowledge of its harmful nature. Pennsylvania’s “reasonably should have known” formulation for distributors deserves careful study as a model.
Third, platform obligations must be clarified and standardised. The TAKE IT DOWN Act’s notice-and-takedown framework is a beginning, but it covers only intimate imagery and creates a compliance deadline that many platforms are still working to meet as of early 2026. A broader federal framework governing platforms’ obligations across all categories of harmful deepfake content, with safe harbours for good-faith compliance and clear penalties for willful noncompliance, is urgently needed.
Fourth, international coordination is indispensable. The Council of Europe’s Budapest Convention on Cybercrime provides a potential model for a multilateral treaty framework for AI-enabled crimes, but negotiations toward any such instrument are at an early stage. Bilateral law enforcement cooperation agreements, shared technical standards for deepfake attribution and watermarking, and coordinated extradition frameworks are the minimum necessary steps.
Fifth, and perhaps most urgently, the criminal justice system needs technical capacity that it largely lacks. Deloitte’s Center for Financial Services has projected that generative AI could enable fraud losses to reach $40 billion in the United States by 2027; the gap between the scale of the threat and law enforcement’s forensic and investigative capabilities is growing.[13] Federal investment in AI forensics training for prosecutors, investigators, and judges is as necessary as legislative reform.
Conclusion
The deepfake represents something qualitatively new in the history of crime, not merely a new tool for committing old offences, but a technology that undermines the basic epistemic foundations on which criminal law and democratic society depend. If any face can be fabricated, if any voice can be cloned, if any public figure can be depicted committing any act, then the evidentiary basis for criminal prosecution, the reputational anchors of public life, and the shared reality that social trust requires are all placed at risk.
The law’s response has been faster and more vigorous than sceptics predicted. The TAKE IT DOWN Act, the first federal conviction under its terms, the prosecution of electoral deepfake interference in New Hampshire, the proliferation of state statutes, and the emergence of international regulatory frameworks all represent genuine progress. They demonstrate that democratic institutions are capable of identifying this threat and beginning to respond to it.
But the gaps are real, and the stakes are high. The first conviction under the TAKE IT DOWN Act in April 2026 is a landmark, but it addressed conduct by a single man in Ohio. The Arup heist involved sophisticated criminal organisations operating across multiple jurisdictions, and those responsible have not been prosecuted. Deepfake electoral interference has been charged but not yet fully adjudicated on the merits. The children whose AI-generated abuse images circulate on the dark web in the millions are not being reached by the law at anything approaching the scale of the harm.
The legal framework being built in these early years of the deepfake era will shape both the technology’s development and its criminal exploitation for decades to come. Getting it right, precise enough to target genuine harm, restrained enough to protect legitimate expression, robust enough to deter sophisticated transnational actors, is among the most consequential challenges that criminal law faces in the twenty-first century.
* Final Year, B.A. LL.B. Student, University of Allahabad
[1] FBI Internet Crime Complaint Center, 2025 Internet Crime Report (Apr. 2026); see also FBI IC3, 2025 Annual Report, available at https://www.ic3.gov
[2] Brightside AI, Deepfake CEO Fraud: $50M Voice Cloning Threat to CFOs (Oct. 2025), https://www.brside.com/blog/deepfake-ceo-fraud
[3] Internet Watch Foundation, Annual Report on AI-Generated Child Sexual Abuse Material (July 2024)
[4] 18 U.S.C. § 1343 (2018) (Wire Fraud); 18 U.S.C. § 1030 (2018) (Computer Fraud and Abuse Act); see generally HALOCK Security Labs, What Legislation Protects Against Deepfakes and Synthetic Media? (Mar. 2026), https://www.halock.com/what-legislation-protects-against-deepfakes-and-synthetic-media/
[5] See sources cited supra note 4.
[6] Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, Pub. L. No. 119-___, 139 Stat. ___ (2025) (signed May 19, 2025); see Stack Cyber, Deepfake Legislation Tracker: Federal & State Laws (Feb. 1, 2026), https://stackcyber.com/posts/ai-deepfake-laws
[7] Electronic Frontier Foundation, Concerns Regarding the TAKE IT DOWN Act (2025); Center for Democracy and Technology, Statement on the TAKE IT DOWN Act (2025); see TAKE IT DOWN Act, Wikipedia, https://en.wikipedia.org/wiki/TAKE_IT_DOWN_Act
[8] Pennsylvania Act 35 of 2025 (formerly S.B. 649), 2025 Pa. Laws __ (signed July 7, 2025, effective Sept. 5, 2025); Press Release, Governor Josh Shapiro, Gov. Shapiro Signs New Digital Forgery Law (July 7, 2025), https://www.pa.gov/governor/newsroom/2025-press-releases/gov–shapiro-signs-new-digital-forgery-law/; see Crowell & Moring, Forged Faces, Real Liability: Deepfake Laws Take Effect in Washington State and Pennsylvania (Aug. 19, 2025), https://www.crowell.com/en/insights/client-alerts/forged-faces-real-liability-deepfake-laws-take-effect-in-washington-state-and-pennsylvania
[9] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) [2024] OJ L __
[10] Press Release, Dep’t of Justice, Ohio Man Convicted as First Person Under TAKE IT DOWN Act (Apr. 2026) [hereinafter DOJ Strahler Press Release]; Ohio Man Becomes First Person Convicted Under Federal Law Criminalizing Intimate Deepfakes, DOJ Says, NBC News (Apr. 2026), https://www.nbcnews.com/tech/security/first-person-convicted-law-criminalizing-intimate-deepfakes-rcna267236.
[11] The Rise of AI-Cloned Voice Scam, Am. Bar Ass’n, Voice of Experience (Sept. 2025), https://www.americanbar.org/groups/senior_lawyers/resources/voice-of-experience/2025-september/ai-cloned-voice-scam/; NBC Boston, NH Primary AI Deepfake Biden Robocall Source Identified (Feb. 7, 2024), https://www.nbcboston.com/news/politics/nh-election-investigators-giving-update-on-fake-biden-robocall/.
[12] Federal Commc’ns Comm’n, FCC Fines Man Behind Election Interference Scheme, FCC 24-104 (Sept. 26, 2024) (issuing $6 million proposed forfeiture order against Steve Kramer); Telecom Company Agrees to $1M Fine Over Biden Deepfake, NBC News (Aug. 21, 2024), https://www.nbcnews.com/politics/2024-election/telecom-company-agrees-1-million-fine-biden-deepfake-rcna167564; Criminal Charges and FCC Fines Issued for Deepfake Biden Robocalls, NPR (May 23, 2024), https://www.npr.org/2024/05/23/nx-s1-4977582/fcc-ai-deepfake-robocall-biden-new-hampshire-political-operative
[13] Deloitte Center for Financial Services, 2024 Survey on AI Deepfake Incidents (2024) (projecting generative AI could enable fraud losses reaching $40 billion in U.S. by 2027; noting 25.9% of surveyed executives reported deepfake incidents involving financial and accounting data); see also FinCEN, Alert on Fraud Schemes Involving Deepfake Media Targeting Financial Institutions, FIN-2024-Alert004 (Nov. 2024).