-- JAMES LAIRD
FEATURE
Most organizations run advanced cybersecurity: endpoint detection and response (EDR), security information and event management (SIEM), and zero trust. Yet their contact centers – handling millions of interactions – sit outside these defenses.
This disconnect leaves contact centers vulnerable to attackers who socially engineer agents, bypass multifactor authentication (MFA), and extract personal data to penetrate wider systems.
AI‐powered deepfakes and synthetic identities amplify the threat:
• Attackers can now clone a customer's voice from seconds of publicly available audio.
• Synthetic identities blend real and fabricated data to pass traditional know-your-customer (KYC) checks.
The contact center is a prime target because emotional manipulation and urgency can override procedural safeguards.
An agent under pressure to deliver good CX shouldn't be expected to detect a convincing deepfake. Instead, AI-powered detection needs to sit alongside the threat, analyzing conversational patterns and behavioral signals in real time.
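As a minimal sketch of what "analyzing conversational patterns in real time" can mean in practice, the toy scorer below accumulates risk from urgency language and verification-bypass phrasing across a call. All keyword lists, weights, and the threshold are illustrative assumptions, not taken from any production detection system:

```python
# Illustrative real-time risk scoring over a live call transcript.
# Keyword lists, weights, and the threshold are hypothetical examples.
URGENCY = {"immediately", "right now", "emergency", "today"}
BYPASS = {"skip verification", "don't need my password", "new phone number"}

def score_utterance(text: str) -> int:
    """Return a crude risk score for one caller utterance."""
    t = text.lower()
    score = 2 * sum(1 for kw in URGENCY if kw in t)
    score += 5 * sum(1 for kw in BYPASS if kw in t)
    return score

def call_risk(utterances: list[str], threshold: int = 7) -> tuple[int, bool]:
    """Accumulate per-utterance scores; flag the call past the threshold."""
    total = sum(score_utterance(u) for u in utterances)
    return total, total >= threshold

total, flagged = call_risk([
    "I need this transfer done immediately, it's an emergency.",
    "You don't need my password, I'm calling from a new phone number.",
])
```

A real system would use trained models rather than keyword lists, but the shape is the same: score each utterance as it arrives and surface an alert to the agent before the call ends.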
There are also "fattening the pig" (or "pig butchering") scams, in which organized crime networks impersonate genuine contact center agents to build deceptive relationships with victims over time before executing large financial thefts. Cross-interaction analysis can help uncover them early.
This is an arms race. Smart suppliers are investing heavily in R&D and working with best-of-breed partners to ensure customers benefit from layered, continuously evolving defenses.
Closing this gap means integrating contact center intelligence into the wider cyber ecosystem so that security operations center (SOC) teams, fraud teams, and contact center teams (including the agents) operate with shared insights.
LET'S DISCUSS SPECIFICS. WHAT ARE THE TOP FIVE RISKS FOR CONTACT CENTERS, RANKED?
A: The top five risks for contact centers are:
1. Soft Fraud (First-Party Fraud). Genuine customers inflating insurance claims, filing false chargebacks, exploiting return policies, or exaggerating damages. Often rationalized as "harmless," these schemes cost billions annually.
2. Agent-Facilitated Hard Fraud. Social engineering attacks manipulating well-meaning agents into bypassing security protocols, enabling account takeovers (ATOs) or unauthorized transactions.
3. Credential Compromise. Mass ATO attempts using breached password databases, combined with social engineering, to bypass authentication.
4. Insider Threats. Employees accessing or modifying customer data inappropriately, either maliciously or through negligence.
5. Regulatory Noncompliance. Failure to detect and report fraud patterns leading to regulatory penalties and reputational damage.
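Of the five risks above, credential compromise is the most mechanically detectable, because mass ATO attempts leave a telltale shape in authentication logs: one source failing against many distinct accounts. A minimal sketch, with hypothetical record fields and an illustrative threshold:

```python
# Illustrative credential-stuffing detector: flag sources whose failed
# logins touch many distinct accounts. Field names and the threshold
# are hypothetical assumptions, not from any specific product.
from collections import defaultdict

def flag_stuffing_sources(failed_logins, min_accounts=3):
    """failed_logins: iterable of (source_ip, account_id) failed attempts.
    Return the set of source IPs that failed against min_accounts or
    more distinct accounts."""
    accounts_by_ip = defaultdict(set)
    for ip, account in failed_logins:
        accounts_by_ip[ip].add(account)
    return {ip for ip, accts in accounts_by_ip.items() if len(accts) >= min_accounts}

suspects = flag_stuffing_sources([
    ("10.0.0.5", "alice"), ("10.0.0.5", "bob"),
    ("10.0.0.5", "carol"), ("198.51.100.9", "dave"),
])
```

In production this kind of aggregation would run over a sliding time window and feed the SOC, but the core signal is exactly this cardinality count per source.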
" MOST ORGANIZATIONS RUN ADVANCED CYBERSECURITY... YET THEIR CONTACT CENTERS – HANDLING MILLIONS OF INTERACTIONS – SIT OUTSIDE THESE DEFENSES."
-- JAMES LAIRD
WHAT STRATEGIES ARE WORKING? WHICH ONES ARE NO LONGER EFFECTIVE?
A: The most effective strategies today are dynamic and AI‐driven:
• Conversational analytics can spot rehearsed narratives or inconsistencies in real time.
• Behavioral biometrics flag unusual intelligent virtual assistant (IVA) or navigation patterns.
• Agent‐assist tools surface fraud indicators instantly without disrupting customer flow.
• Cross-interaction analysis uncovers patterns individual agents can't see.
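To make the last bullet concrete: a caller probing the same account across several short calls looks routine to each agent, but the pattern is obvious once interactions are aggregated. The sketch below flags accounts touched by several separate calls inside a time window; the record fields, window, and threshold are illustrative assumptions:

```python
# Illustrative cross-interaction analysis: flag accounts contacted in
# several separate calls within a short window, a pattern no single
# agent would see. Fields and thresholds are hypothetical.
from collections import defaultdict
from datetime import datetime, timedelta

def accounts_probed_across_calls(calls, window=timedelta(days=7), min_calls=3):
    """calls: iterable of (account_id, timestamp) records.
    Return accounts with min_calls or more calls inside the window."""
    by_account = defaultdict(list)
    for account, ts in calls:
        by_account[account].append(ts)
    flagged = set()
    for account, times in by_account.items():
        times.sort()
        # Slide a window of min_calls consecutive timestamps.
        for i in range(len(times) - min_calls + 1):
            if times[i + min_calls - 1] - times[i] <= window:
                flagged.add(account)
                break
    return flagged

d = datetime
probed = accounts_probed_across_calls([
    ("acct-1", d(2026, 5, 1)), ("acct-1", d(2026, 5, 3)), ("acct-1", d(2026, 5, 5)),
    ("acct-2", d(2026, 5, 1)), ("acct-2", d(2026, 6, 10)),
])
```

The same aggregation, run over caller identity instead of account, is one way to surface the slow relationship-building phase of pig-butchering scams.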
What no longer works: knowledge-based authentication alone, rigid verification scripts, and expecting agents to detect sophisticated threats unaided. Static, checklist-based defenses simply can't keep pace with attackers who adapt quickly.
IS AI A SECURITY ASSET OR A THREAT? WHAT IS YOUR ASSESSMENT OF IT ON BALANCE?
A: AI is both, but on balance it's a powerful security asset when used responsibly. As I discussed in response to your first question, threat actors can now generate deepfakes or synthetic identities at scale. But defenders can analyze patterns, behaviors, and anomalies far faster than humans alone.
MAY 2026