Contact Center Pipeline May 2025 | Page 40

One example of how sophisticated these fully automated Agentic AI agents have become is the new“ super connector” voice assistant Boardy AI. Here’ s how Boardy AI works:
• A user provides their phone number to an organization and then receives a call from Boardy.
• Based on the conversation, Boardy then searches the Boardy network to find investors or customers who may be a good networking fit and offers to introduce them via a“ double optin introduction.”
• Once accepted, Boardy connects the parties via email.
AI agents can do many helpful things for individuals and companies, from planning a complex travel itinerary to handling difficult customer queries across multiple channels. But as I will outline, in the hands of a threat actor, Agentic AI’ s real-time adaptability and hyper-realistic human-like voices are potent weapons.
AGENTIC AI THREATS
Contact center leaders should remember that, just like GenAI and AI more generally, Agentic AI is a tool. As such, what matters is what it can do, and where and how it’ s used by bad actors.
GenAI helps fraudsters refine their existing techniques. ChatGPT, deepfake generators, and other GenAI tools make it easy to create convincing phishing attacks and realistic deepfake impersonations. Agentic AI makes it easy for fraudsters to scale their attacks; imagine an army of AI agents deployed to nefarious ends. Combined, Agentic AI and GenAI deepfakes give fraudsters new“ superpowers.”
To protect against Agentic AI threats, start by understanding its current and near-future capabilities. For example, for many years now fraudsters have been selling automated Phishing-as-a-Service( PhaaS) bots. The latest generation of Agentic AI can be thought of as combining PhaaS with GenAI deepfake technology.
Deepfake AI agents can impersonate employees, executives, and customers with a level of realism that ' s virtually impossible for a human to detect, even
40 CONTACT CENTER PIPELINE with awareness training. The Federal Communications Commission( FCC) has explicitly warned that“ deepfake audio and video links make robocalls and scam tests harder to spot.”
But Agentic AI goes farther and deeper. A recent study found that Agentic AI can convincingly“ replicate the attitudes and behaviors” of humans. For example, Agentic AI could create personalized, automated phishing attacks using legitimate customer interaction histories from call centers. Here’ s how:
• An AI agent might call customers and impersonate your organization to build trust and persuade them to disclose their credentials.
• If a phishing call or email is ignored, the AI might follow up on its own with a more urgent tone.
This scalable scam turns isolated fraud attempts into a systemic threat, overwhelming contact center fraud detection systems. And because Agentic AI has memory, it can learn to improve its phishing approaches or messages based on prior interactions with targets.
Worse yet, we’ ve even seen Agentic AI using its own language. During a demo at a recent hackathon, one AI agent calls another to make a hotel reservation. The two AI agents recognize each other and switch to“ jibber,” a high-frequency communication that’ s 80 % faster than speech yet sounds like an old-school dial-up modem.
The implications for customer communications are staggering. At some point in the not-so-distant future, one AI agent, acting on behalf of a customer, could“ talk” to another AI agent representing a company in a way that’ s unintelligible to humans.
This invites the potential for untold " secret " communications that the customer did not intend, or that may even be harmful to the customer. Such as an AI agent agreeing to a lower refund or authorizing a payment the customer does not wish to make.
CREATE YOUR AGENTIC AI RISK MATRIX
Once you have a general understanding of what Agentic AI can do, map out the ways in which a bad actor could use it to attack your contact center. Then create a risk matrix which shows the severity of the potential threat against its likelihood of occurring.
From this matrix, you can create a plan for remediating Agentic AI threats. Incorporate both proactive mitigations and reactive detections. Combine employee training, software tools, and business processes to build a robust program following defense-in-depth principles( AKA the“ swiss cheese” methodology).
DEFENDING AGAINST AGENTIC AI AND DEEPFAKES
The average contact center agent may have no idea that deepfakes have become so sophisticated. A good starting point, therefore, is awareness training. Many vendors offer pre-built awareness training modules which cover various deepfakes and AI threats. Or you could create your own training using publicly available deepfake tools.
But awareness training alone is not enough to effectively counter Agentic AI and deepfake threats; one meta-analysis found that“ Overall deepfake detection rates( sensitivity) were not significantly above chance.”
To properly defend against today’ s deepfake-powered Agentic AI attacks, contact center leaders must deploy the latest in deepfake defense technologies.
Many available deepfake detection tools for contact centers rely on AI models trained to detect AI-generated voices. But this approach creates an arms race, in which defenders are always on their back foot: the defending AI models need to be trained on examples of attacking AI content, meaning that the attackers are always one step ahead.