
When it comes to AI-enhanced fraud perpetrated through the voice channel, we see several emerging trends that are profoundly impacting the contact center's performance and security. These threat trends can be viewed through three lenses:
1. The proliferation of attacks
2. The potency of attacks
3. The profile of attackers
PROLIFERATION OF ATTACKS
We know that customer service-oriented contact centers, with their ready access to account credentials and personal data, are particularly attractive targets for criminal intruders seeking to exploit the human vulnerabilities of live agents.
We also know that the voice channel, in particular, has emerged as a favored conduit for AI-enhanced cyberattack activity.
In its“ 2024 Voice Intelligence and Security Report”, voice authentication solution provider Pindrop reported that, from 2022 to 2023, the rate of fraudulent calls into contact centers surged by 60 % with no abatement in sight.
It's hard to overlook the fact that this sudden escalation emerged shortly after GenAI applications for synthetic content creation made their 2022 debut.
POTENCY OF ATTACKS
Already, 85% of enterprise cybersecurity leaders say that recent cyberattacks are being augmented by GenAI applications.
According to experts monitoring activity on the dark web, cybercriminals are using AI-based tools to:
• Quickly gather intel on high-value targets.
• Identify vulnerabilities in specific organizations for more precise attack targeting.
• Harvest and organize a portfolio of personally identifying information (PII) used to create false identities and defeat traditional caller authentication practices.
• Generate interactive scripts, deepfake images, and cloned voices used to advance criminal deceptions.
• Write detection-evading code for malware.
What's more, according to a recent report on cybersecurity news site DarkReading, easy access to open-source GenAI technology has spawned an underground factory for bootleg applications unrestricted by laws or regulations. Much of that activity is now centered around applications designed to create artificial content for criminal exploitation.
There can be little doubt about the intention of applications marketed on the dark web with names such as:
• "EvilProxy" (a kit used by hackers to bypass standard multi-factor authentication [MFA] security protocols).
• "FraudGPT" (a malicious chatbot that creates not only interactive content for phishing/vishing [voice phishing] attacks but also malicious code for use in ransomware attacks).
For a small subscription fee, any cyberthief wannabe can now be handed a toolkit for launching sophisticated cyberattack campaigns from a personal computer. No special skills required.
PROFILE OF ATTACKERS
Not only are new AI-enabled threat tactics expanding exponentially; so, too, are the legions of criminals practicing them.
Wide-scale social engineering and vishing campaigns that once required the expertise of large criminal gangs like REvil, Black Cat, and Scattered Spider can now be launched by lone individuals leveraging illicit, AI-based hacker toolkits.
While it's long been commonplace for phone-based fraudsters to hide their identities behind spoofed (digitally altered) phone numbers, they can now attach those numbers to multiple false identities.
And, with just a bit of information about the people they are imitating, plus short audio clips lifted from social media, threat agents can use AI models not only to write convincing, interactive scripts that resemble trusted sources but to actually sound like those sources.
The value of that to a criminal impostor is immense. And it's no longer just theoretical.
There are, in fact, already more than 350 GenAI-empowered tools dedicated to voice cloning alone, with each new generation improving on the quality of the last.
REGULATORS SOUND THE ALARM
While U.S. federal agencies have taken note, it appears there is little they can do beyond posting advisory alerts.
In an October 2024 Industry Letter addressing statewide financial organizations and affiliates, the New York State Department of Financial Services (NYDFS) warns of the imminent threat posed by criminal agents utilizing the power of AI in their schemes.
The letter details how criminal access to AI applications has amplified the effectiveness, scale, and speed of their cyberattacks, particularly on financial institutions. It also points out that AI-generated audio, video, and text ("deepfakes") are making it harder to distinguish legitimate customer callers from impostors.