Generative AI is driving an unprecedented surge in sophisticated scams, exposing critical gaps in global fraud legislation.
Criminals now leverage AI to replicate voices, create convincing deepfake video calls, and construct synthetic identities using minimal online data. In May 2025, Google introduced AI-powered tools in Chrome to combat real-time impersonation scams and malicious pop-ups, highlighting the escalating threat.
A 2024 incident in Hong Kong underscores the severity: a finance employee at multinational engineering firm Arup wired roughly $25 million to fraudsters after a video call in which deepfakes impersonated the company’s CFO and other colleagues. Deloitte projects that generative AI could drive U.S. fraud losses to $40 billion by 2027. Despite this, U.S. laws remain outdated, lacking provisions to address AI-generated deception or streamline victim recourse.
“The legal system is unprepared for this scale of deception,” says Joseph Osborne of Osborne & Francis. “Victims face impossible burdens of proof, while laws still operate on 20th-century definitions.” Osborne advises businesses to implement strict transaction verification, train teams to detect AI anomalies, and consult legal experts immediately if fraud is suspected.
As AI scams grow more advanced, the legal community faces a dual challenge: closing today’s gaps while anticipating tomorrow’s threats.