A familiar voice used to mean safety. Now, it’s a weapon. In 2024, a Hong Kong finance worker wired $25 million to scammers who perfectly cloned their CFO’s voice on a video call. Deepfake voice fraud is no longer theoretical – it’s a full-blown crisis for call centres.
A recent report confirms that deepfake fraud calls are rising sharply: more than one-third of respondents in the US, UK, Canada, Germany, France, and Spain encountered a deepfake voice fraud call in the past year, and over 30% of those targeted suffered significant financial losses.
The urgency is clear: call centres are now a primary target for AI-powered fraud. The same report finds that the average reported loss per fraud-call victim in the US is $539, but deepfake fraud calls do far greater damage: victims of deepfake calls are more likely than victims of traditional phone scams to report losses exceeding $6,000. Fraudsters now clone voices to impersonate loved ones, financial institutions, and government agencies, deceiving victims into handing over sensitive information.
Deepfake voice attacks use AI to clone a person’s voice, and recent advances in generative AI make it possible to emulate a speaker’s tone and cadence with alarming accuracy. The scale of the threat is growing: in the first quarter of 2025 alone, 179 deepfake incidents were reported globally – a 19% increase over the total for all of 2024. Fraud accounts for 31% of all deepfake incidents, and call centres are frequently targeted by fraudsters using synthetic voices to manipulate customer service representatives into granting account access. Seven in ten (70%) financial firms reported increased call spoofing in the call centre, and 60% said that most account takeovers now start there.
Three Ways Call Centres Can Defend Against Deepfake Voice Attacks
1. Security by Design
Traditional authentication methods, such as passwords, security questions, and one-time passcodes, are increasingly vulnerable to interception and social engineering. AI-powered voice technology offers a more robust solution, analysing unique voiceprints and speech patterns to verify identities in real time. This technology can help detect and prevent unauthorised access, even when fraudsters use deepfake voices. Real-time fraud detection using voice biometrics is now critical for preventing financial losses and identity theft in call centres.
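To make the idea concrete, here is a minimal sketch of how a call centre might layer voiceprint matching with an anti-spoofing check. Everything below is illustrative: `embed_voice` stands in for a real speaker-embedding model (such as an ECAPA-TDNN), the spoof score is assumed to come from a separate deepfake-detection model, and the thresholds are placeholders rather than tuned values.

```python
import numpy as np

# Stand-in for a real speaker-embedding model (e.g. an ECAPA-TDNN).
# In production this would map call audio to a fixed-length voiceprint;
# here it hashes the bytes into a deterministic unit vector so the
# sketch runs end to end.
def embed_voice(audio: bytes, dim: int = 192) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(audio)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def verify_caller(enrolled: np.ndarray, live_audio: bytes,
                  spoof_score: float,
                  match_threshold: float = 0.75,
                  spoof_threshold: float = 0.5) -> str:
    """Combine voiceprint similarity with a liveness/anti-spoofing score.

    spoof_score is assumed to come from a separate deepfake-detection
    model (higher = more likely synthetic); thresholds are placeholders.
    """
    similarity = float(np.dot(enrolled, embed_voice(live_audio)))
    if spoof_score >= spoof_threshold:
        return "REJECT: synthetic voice suspected, escalate to a human agent"
    if similarity >= match_threshold:
        return "ACCEPT: voiceprint matches the enrolled customer"
    return "STEP-UP: request an additional authentication factor"

# Enrol a customer, then check a genuine call and a suspected deepfake.
enrolled_print = embed_voice(b"enrolment-call-audio")
print(verify_caller(enrolled_print, b"enrolment-call-audio", spoof_score=0.1))
print(verify_caller(enrolled_print, b"cloned-voice-audio", spoof_score=0.9))
```

The design point is the layering: even a perfect voiceprint match is rejected when the anti-spoofing model flags the audio as synthetic, which is precisely what defeats a cloned voice.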
2. Radical Transparency
As regulations such as the EU AI Act come into force, organisations must disclose when customers interact with AI systems. Transparency about how voice data is used, how authentication decisions are made, and when AI is involved in the process is essential for maintaining customer trust. Clear communication and consent-driven processes for any voice data usage, including voice cloning for authentication, are recommended best practices.
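One way to operationalise this is to script the disclosure and record consent in an auditable form. The sketch below is an assumption, not a prescribed EU AI Act schema: the field names and wording are illustrative, and any real disclosure script should be reviewed by counsel.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative consent record; the field names are assumptions,
# not a prescribed regulatory schema.
@dataclass
class VoiceDataConsent:
    customer_id: str
    purpose: str               # e.g. "voice-biometric authentication"
    ai_disclosure_given: bool  # customer was told an AI system is involved
    consent_granted: bool
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def disclosure_script(purpose: str) -> str:
    # Sample wording only; real scripts need legal review.
    return (f"This call uses an AI system for {purpose}. Your voice data "
            "will be processed to verify your identity. Do you consent? "
            "You may opt out and use another authentication method.")

record = VoiceDataConsent(
    customer_id="cust-1042",
    purpose="voice-biometric authentication",
    ai_disclosure_given=True,
    consent_granted=True,
)
print(disclosure_script(record.purpose))
```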
3. Ethical Guardrails
While AI can automate many routine inquiries, human oversight remains essential for high-risk or ambiguous scenarios. Regular audits for bias and inclusivity in voice AI systems, escalation protocols for suspicious interactions, and simulated attack scenarios designed to stress-test the system are all critical components of a responsible security posture. These measures ensure that automation enhances, rather than undermines, both security and customer experience.
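As a sketch of what an escalation protocol might look like in code: the signal names, weights, and thresholds below are illustrative assumptions rather than a vetted fraud model; the point is the routing logic, which keeps a human in the loop once accumulated risk crosses a threshold.

```python
# Illustrative fraud signals and weights; a real deployment would
# calibrate these against labelled incident data.
RISK_WEIGHTS = {
    "spoof_score_high": 0.5,     # anti-spoofing model flags synthetic voice
    "voiceprint_mismatch": 0.3,  # biometric similarity below threshold
    "high_risk_request": 0.2,    # e.g. payee change or credential reset
}

def risk_score(signals: set[str]) -> float:
    return sum(RISK_WEIGHTS.get(s, 0.0) for s in signals)

def route_call(signals: set[str],
               escalate_at: float = 0.5,
               review_at: float = 0.2) -> str:
    """Route a call based on accumulated risk; thresholds are placeholders."""
    score = risk_score(signals)
    if score >= escalate_at:
        return "ESCALATE: transfer to a fraud specialist, freeze sensitive actions"
    if score >= review_at:
        return "REVIEW: human agent approves any account changes"
    return "AUTOMATE: routine inquiry, AI handling permitted"

print(route_call({"spoof_score_high", "high_risk_request"}))  # ESCALATE
print(route_call({"high_risk_request"}))                      # REVIEW
print(route_call(set()))                                      # AUTOMATE
```

The same routing table doubles as a harness for simulated attack scenarios: red-team exercises can inject signal combinations and confirm that cloned-voice attempts never fall through to the fully automated path.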
The Road Ahead: Trust as a Competitive Advantage
The rise of deepfake voice fraud is not just a technical challenge; it is a test of trust for the entire call centre industry. As AI-powered attacks become more sophisticated, organisations must adopt multi-layered, real-time security frameworks that go beyond outdated methods. Implementing AI voice biometrics, maintaining transparency, and enforcing ethical oversight are no longer optional – they are essential for safeguarding both customers and business reputation.
The technology to stop deepfakes already exists. What’s missing is the urgency. In a world where anyone’s voice can be faked, the brands that invest in trust will be the ones that win.