When I first applied for a mortgage, I was soon met with a storm of twenty to fifty calls a day, and my inbox was flooded with emails offering “special rates.” What I didn’t realize was that the hard credit inquiry my lender ran was logged by the credit bureaus, which then sold that information to competing lenders. Those competitors launched a relentless campaign to win my business. Some even lied, claiming to work with Jeff, my broker. Eventually, I caught on, but the experience left me frustrated and disillusioned with the process. What’s worse is that this predatory behavior is legal.
Now imagine that same cycle, but with AI voice agents. What if “Jeff’s colleague” sounded so human you’d never suspect deceit? Voice clones can A/B test scripts, refine white lies and adapt in real time. What was once an annoying, human-led capitalistic competition becomes a cold, fully automated, systemic battle for our finances. Now imagine the caller isn’t a lender at all, but an AI-operated scam.
What makes AI especially dangerous is its ability to mimic the cues we depend on for trust: familiar voices, recognizable scripts, caller IDs and even voice-based authentication can now be convincingly faked. Social engineering has always been the Achilles’ heel of security. Phishing, spoofing and impersonation thrive on the illusion of trust. Soon, AI-to-human interactions may outnumber online human-to-human interactions, and when those AI systems are used with malicious intent, the scale and impact of fraud can be staggering.
The FBI’s 2024 Internet Crime Report logged over $16 billion in losses, up 33% from 2023; the biggest categories were phishing/spoofing, extortion and personal-data breaches. In parallel, the FBI has warned of criminals using generative AI to commit fraud at a larger scale: synthetic voice, images and text are increasingly used in investment scams, extortion and identity fraud. The FBI has also warned that malicious actors have used text messages and AI-generated voice messages to impersonate senior U.S. officials. One alert reported that AI-based voice cloning surged by 442% between the first and second half of 2024, and the content of those impersonations is increasingly tailored, believable and dangerous.
Recently, at a Federal Reserve conference, the CEO of OpenAI, the company behind ChatGPT, spoke out on this exact trend. He warned of a “significant impending fraud crisis” in banking, because AI voice-cloning technology has advanced to the point where voice-based authentication is no longer safe. He noted that some banks still accept a voiceprint plus a challenge phrase as sufficient identity verification. “That is a crazy thing to still be doing,” he said. “AI has fully defeated that.” And the same failure that now afflicts voice calls will soon reach video and FaceTime, once fakes become indistinguishable from reality. Just check out the new Sora 2 release.
What’s most concerning is not just the fraud but the erosion of public trust. When you can no longer believe who you’re talking to (and soon, who you’re looking at), every call becomes suspect. That’s a toxic shift for new home buyers, for consumers in general and for the companies that want repeat customers. What we need now is urgency in regulation, in corporate responsibility and in public awareness.
Certain companies building AI voice systems are acutely aware of the oncoming wave of distrust and are embedding security from the start. They are designing systems that filter unsafe inputs, flag suspicious behavior in real time and route risky interactions to human operators. Others are experimenting with continuous voice authentication, liveness checks and on-device privacy protections to make it harder for deepfakes to slip through the cracks. But the race between fraudsters and defenders is moving fast, and these defenses need to evolve as quickly as the attacks.
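To make that escalation idea concrete, here is a minimal, illustrative sketch in Python. The signal names, weights and thresholds are hypothetical, not any vendor’s actual API; the point is the routing principle: score each live call on signals like a failed liveness check or anomalous speech, and hand anything ambiguous to a person rather than letting the bot proceed.

```python
from dataclasses import dataclass

# Hypothetical per-call risk signals a voice-security layer might compute.
# All names and thresholds are illustrative only.
@dataclass
class CallSignals:
    liveness_score: float       # 0.0-1.0; low = likely synthetic or replayed audio
    voiceprint_match: float     # 0.0-1.0; similarity to the enrolled caller
    speech_anomaly: float       # 0.0-1.0; high = unusual cadence or prosody
    requests_credentials: bool  # caller asks for passwords, OTPs or wire changes

def route_call(signals: CallSignals) -> str:
    """Return 'block', 'human_review' or 'allow' for a live call."""
    # Hard stop: likely deepfake audio asking for sensitive actions.
    if signals.liveness_score < 0.3 and signals.requests_credentials:
        return "block"

    # Accumulate a simple weighted risk score from the remaining signals.
    risk = (1.0 - signals.liveness_score) * 0.4
    risk += (1.0 - signals.voiceprint_match) * 0.3
    risk += signals.speech_anomaly * 0.3
    if signals.requests_credentials:
        risk += 0.2

    # Anything ambiguous shifts to a human operator instead of proceeding.
    return "human_review" if risk >= 0.5 else "allow"

if __name__ == "__main__":
    suspicious = CallSignals(liveness_score=0.4, voiceprint_match=0.5,
                             speech_anomaly=0.7, requests_credentials=True)
    print(route_call(suspicious))  # -> "human_review"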
These are some examples of how businesses should begin thinking about fraud prevention. But it is also up to institutions, regulators and individuals to organize and take action against these emerging threats. Security needs to be embedded into systems from the ground up.
For institutions:
- Use real-time, AI/ML-based transaction monitoring and related fraud-detection tools.
- Collaborate through cross-sector intelligence-sharing networks.
- Update identity verification systems.
- Ensure employee fraud awareness training.
- Incorporate voice-first security measures, such as continuous authentication, speech anomaly detection and liveness checks, to guard against AI-based impersonation.
For regulators:
- Require real-time monitoring and anomaly detection.
- Encourage collaborative industry reporting (as outlined in FINRA’s 2025 Regulatory Oversight Report) and best-practice sharing.
- Enforce consumer protection and clear disclosure.
- Mandate voice-authentication standards that address deepfakes and AI-generated speech.
For individuals:
- Engage with fraud awareness campaigns (e.g., #BanksNeverAskThat).
- Build verification habits, such as hanging up and calling back on a number you already trust.
- Use scam-reporting and educational portals.
As fraudsters evolve with AI, so must we.
My mortgage experience is far from unique; our everyday lives are filled with these small societal annoyances. But with the advent of AI, those frustrations can become far more alarming. AI won’t merely flood your voicemail; it will undermine the institution of trust itself. Corporations, governments and citizens are all responsible for what comes next. This is a pivotal moment. The future of trust is a choice we have to make. Let’s make the right one.