4. Keep AI agents separated. Not every bot should be able to talk to every other bot. By isolating them into specific roles, you reduce the risk that a simple agent can trigger potentially dangerous administrative actions.
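As a minimal sketch of this separation, an explicit per-role allow-list can be checked before any agent action is executed. The role and action names below are hypothetical; the point is the deny-by-default pattern:

```python
# Hypothetical sketch: each agent role gets an explicit allow-list,
# so a simple support bot can never invoke administrative actions.
ROLE_PERMISSIONS = {
    "support_agent": {"lookup_order", "create_ticket"},
    "admin_agent": {"lookup_order", "create_ticket",
                    "delete_record", "change_permissions"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: an action must be explicitly granted to the role."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Because unknown roles map to an empty set, anything not explicitly granted is refused, which is the property that keeps a low-privilege agent from triggering dangerous administrative actions.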
5. Detect unusual AI agent activity.
Detection is the final safety net. Just as conversations between humans are contextual and differ from person to person, so will user conversations with AI agents: no two conversations may be alike.
Manually unpacking the full context of every AI interaction to determine whether it is dangerous would therefore be a herculean task.
Organizations should instead extend the scope of their existing automated detection capabilities to cover AI agent activity.
Data points such as the role of the AI agent, the user's "job" within the organization, and the conversation's history in its totality should be treated as essential inputs to any automated investigation.
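To make the idea concrete, those three data points can be combined into a simple automated triage check. This is an illustrative sketch only; the field names, roles, and risky-action list are assumptions, not a real detection product:

```python
# Hypothetical sketch: flag a conversation for human review when its
# full history contains a high-risk request that neither the agent's
# role nor the user's job would justify.
HIGH_RISK_ACTIONS = {"delete_record", "export_data", "change_permissions"}

def flag_for_review(agent_role: str, user_job: str, history: list[str]) -> bool:
    # Scan the conversation history in its totality, not just the last turn.
    requested_risky = any(action in message
                          for message in history
                          for action in HIGH_RISK_ACTIONS)
    # A privileged agent serving a privileged user may legitimately
    # perform such actions; anyone else gets flagged.
    privileged = (agent_role == "admin_agent"
                  and user_job in {"administrator", "it_ops"})
    return requested_risky and not privileged
```

The design choice worth noting is that the flag depends on all three signals together: the same request is routine for one role-and-job pairing and suspicious for another.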
SECURING CRM AI'S FUTURE
AI accelerates the pace of customer support operations, but it also increases risk. The challenge is not whether to use AI, but how to use it safely.
CRM AI should be viewed as both a user and as a system that requires constant oversight. Each agent should have narrowly defined access and clear boundaries.
When AI is kept within those guardrails, it can deliver real value without putting customers or the business at risk. The goal is not to slow innovation, but to ensure that as AI runs faster, it runs safely.
Aaron Costello is Chief of Security Research at SaaS and AI security company AppOmni. He leads groundbreaking initiatives to fortify cloud landscapes against threats, shining a spotlight on the intricacies of SaaS security within major platforms including Salesforce and ServiceNow. He's unveiled vulnerabilities, shared crucial insights, and shaped best practices.
AI-DRIVEN RISKS
LOOKING AHEAD
The evolution of eCommerce agentic AI is still in its early stages. Vendors and researchers continue to explore frameworks like OpenAI's Agentic Commerce Protocol, which promises to connect AI assistants to merchant catalogs and checkout flows.
While these systems may streamline shopping, they also reduce the visibility of underlying transactions, requiring merchants and contact centers to rethink their approach to risk and customer support.
Ultimately, the organizations that succeed in this new era will be those that anticipate the technology's disruptive effects, equip their teams with transaction and risk data along with AI-driven tools and insights, and adapt workflows to both mitigate risk and preserve customer experience (CX).
For contact centers, this means balancing speed and security, leveraging AI to support staff rather than overwhelm them, and treating AI-driven transactions not as anomalies but as the new normal.
Agentic AI is changing the rules. By understanding the technical shifts, anticipating spikes in consumer inquiries, and integrating fraud and contact center strategies, businesses can navigate this transition with confidence: keeping both risk and customer satisfaction under control.
Leading product management at Riskified, Shahar Yaari drives innovation for the company's suite of fraud and risk intelligence solutions that empower merchants to thrive in the ever-evolving omnichannel threat landscape. Prior to Riskified, Shahar led product teams at NICE Actimize for over a decade, delivering risk and compliance solutions.
20 CONTACT CENTER PIPELINE