Contact Center Pipeline April 2026 | Page 25

... ORGANIZATIONS SHOULD ASSESS THE SPECIFIC SKILLS THAT PREDICT WHETHER SOMEONE CAN BECOME PROFICIENT QUICKLY.

But the unintended consequence is a narrower applicant pool shaped by access and prior opportunity, rather than readiness to succeed.
The scenarios requiring candidates with effective AI literacy are not theoretical:
• An AI-generated call summary may sound polished, but it omits key details required for downstream resolution or compliance.
• In other cases, an AI knowledge suggestion might point to the wrong policy or apply the right policy to the wrong customer context, requiring the representative to recognize the mismatch quickly.
In both situations, the agent's job is no longer just following guidance. Instead, it is actively validating it.
This is why AI can raise, not lower, the skill requirements for many roles. In AI-enabled workflows, the representative becomes not only a customer advocate, but also a real-time quality control layer for automated support.
Many of these skills are developed through on-the-job exposure and coaching, particularly in environments that require navigating multiple systems, managing high interaction volume, and operating within tightly structured workflows.
Candidates who have not had access to those environments may possess the underlying capability to succeed but lack conventional signals of readiness. As a result, they may be rejected despite their potential.
THE FLAWS OF EXPERIENCE
As these requirements rise, hiring processes that rely heavily on resumes or prior job titles can unintentionally favor familiarity over potential. Resumes can indicate whether someone has been in these environments, but they rarely indicate whether someone was effective in them.
In other words, experience can be an imperfect proxy. It may screen out candidates with strong underlying capability who have not had the opportunity to demonstrate it, while screening in candidates whose prior exposure does not translate into strong performance.
Skills-based testing is one evidence-based alternative that reduces reliance on resume-based proxies by replacing them with direct evidence. Rather than inferring capability from job history, organizations can evaluate job-relevant behaviors directly through simulations, work samples, and structured assessments.
Importantly, to return to the discussion of AI skills, the goal should not be to confirm full mastery of AI-enabled workflows before day one. In most cases, that mastery develops inside the job.
Instead, organizations should assess the specific skills that predict whether someone can become proficient quickly: learning agility, process discipline, attention to detail, and the ability to verify and apply information accurately in real time.
The hiring goal is not to find candidates who have already mastered the workflow. It is to identify candidates who have the prerequisite skills to master it quickly. However, the result of relying on experience is not necessarily better selection, but narrower selection based on background rather than capability.
In practice, that often means favoring those who have had prior exposure to emerging tools over those who demonstrate the judgment required to use them well, particularly when organizations treat AI familiarity as proof of AI capability.
The question is: who does that leave out? Those candidates whose capability exceeds their resume.
When hiring systems rely heavily on experience thresholds, credential requirements, and subjective notions of readiness, they systematically disadvantage people who have the underlying skills to succeed but have not yet had access to the environments that signal those skills in familiar ways.
Here are two examples:
1. Candidates who learn quickly, apply information accurately, and perform well under pressure, but who lack the conventional markers that screening systems are designed to recognize.
In practice, this often includes early-career candidates, people from non-traditional backgrounds, career switchers, and those whose prior roles did not carry the expected job titles, even when the work itself developed relevant capability.
2. Candidates who communicate differently in interviews or who do not conform to informal expectations of polish, despite being highly effective once expectations and workflows are clear.
The common thread is not lower ability but limited access to prior opportunity.
As skill expectations continue to rise faster than labor supply, excluding these groups is not just an equity concern. It is a capacity problem, and organizations risk choking off the very pipeline they need to sustain performance over time.
The question is not whether standards should rise. They will. The real risk is mistaking exposure for capability and narrowing the talent pool based on signals that do not predict success.
April Cantwell, Ph.D., is Director of People Science at Harver, where she helps organizations turn hiring data into better decisions and better outcomes. For more than 20 years, she has worked at the intersection of applied research and real-world talent strategy, specializing in assessment design, workforce analytics, and practical, evidence-based hiring.