Contact Center Pipeline May 2026 | Page 30

AI ADOPTION... IS A JOURNEY THAT REQUIRES MATCHING YOUR DEPLOYMENT MODEL TO YOUR SECURITY REQUIREMENTS.
Category One: Public Cloud
This configuration works well for organizations where hyperscaler security meets compliance requirements. The CX or help desk platform sends data to foundation models running in the public cloud, so AI processing happens outside the organization.
This option is the fastest way to deploy AI in your contact center, but it comes with the highest compliance risk for organizations with stringent data privacy or regulatory requirements.
Category Two: Virtual Private Cloud (VPC)
In this configuration, data transmission between the contact center and its CX or help desk platform is supported by an isolated private network set up within a public cloud provider’s infrastructure.
It offers better security than the public cloud and meets compliance requirements for some, but not all, organizations in regulated industries.
For example, a VPC that spans multiple global regions would violate data sovereignty requirements. It would not work for companies that handle sensitive, personal, or regulated data that must stay within specific geographic boundaries. Examples include global banks, pharmaceutical companies, and government and public sector organizations.
Communication to the foundation models on the public cloud provider’s service can be secured with strong encryption, for example, accessing Amazon Bedrock LLMs from an AWS VPC via AWS PrivateLink.
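As a hedged sketch of what that looks like in code: the snippet below pins a Bedrock runtime client to a VPC interface endpoint so traffic stays on private networking rather than the public internet. The endpoint DNS name and region here are invented placeholders, not values from this article — in practice you would take them from your own VPC endpoint’s details.

```python
def private_endpoint_url(vpce_dns: str) -> str:
    """Return the HTTPS URL for a PrivateLink interface endpoint.

    The DNS name is supplied by your VPC endpoint configuration;
    the value used below is purely illustrative.
    """
    return f"https://{vpce_dns}"


def bedrock_client(region: str, vpce_dns: str):
    """Build a Bedrock runtime client that routes through the private
    endpoint, so requests never traverse the public internet."""
    import boto3  # AWS SDK for Python; imported lazily so the URL helper stands alone

    return boto3.client(
        "bedrock-runtime",
        region_name=region,
        endpoint_url=private_endpoint_url(vpce_dns),
    )


# Illustrative placeholder values only:
REGION = "eu-central-1"
VPCE_DNS = "vpce-0abc123-example.bedrock-runtime.eu-central-1.vpce.amazonaws.com"
```

The key design point is that the SDK is told explicitly which endpoint to use; without `endpoint_url`, it would default to the service’s public endpoint.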
This model can work for many organizations in regulated industries, including financial services firms and healthcare systems that can accept foundation models running in validated cloud environments.
The foundation model still operates in a public cloud service, but the encrypted transmission and private networking satisfy many security teams’ requirements.
Category Three: Sovereign Cloud
Many organizations are subject to strict data sovereignty rules where the data processed by their contact center cannot leave a specific geographic boundary. In these cases, both the CX/help desk platform and the LLMs must run within the sovereign environment.
Given the high cost of deploying AI infrastructure, this represents a major investment for sovereign cloud providers. These clouds may be country-specific (e.g., “Le Bleu” in France), region-specific (e.g., AWS EU Sovereign Cloud), or industry-specific (e.g., Scaleway’s Sovereign Cloud for Healthcare and Life Sciences).
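One practical consequence of sovereignty rules is that placement logic must refuse any region outside the approved boundary before a workload is ever deployed. Below is a minimal sketch of that guard; the boundary names and region lists are invented examples, not a compliance ruleset — a real allow-list would come from your legal and compliance teams.

```python
# Invented example allow-lists mapping a sovereignty boundary to the
# cloud regions permitted inside it. Illustrative only.
SOVEREIGN_BOUNDARIES = {
    "france": {"eu-west-3"},                            # country-specific
    "eu": {"eu-west-1", "eu-west-3", "eu-central-1"},   # region-specific
}


def assert_in_boundary(boundary: str, region: str) -> str:
    """Fail fast if a workload would be placed outside the sovereign
    boundary, rather than discovering the violation after data has moved."""
    allowed = SOVEREIGN_BOUNDARIES.get(boundary, set())
    if region not in allowed:
        raise ValueError(
            f"{region} is outside the '{boundary}' sovereign boundary"
        )
    return region
```

Failing at placement time, rather than auditing after the fact, mirrors how sovereignty requirements are enforced in practice: the data must never leave the boundary, even transiently.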
Category Four: Fully Private Deployment
Here, you are deploying both your CX/help desk platform and foundation models entirely within your own private environment, whether that’s a private data center, colocation facility, or even an air-gapped environment.
The three major hyperscalers – AWS, Microsoft Azure, and Google Cloud – also support local, private deployment with AWS Outposts, Microsoft Azure Local, and Google Distributed Cloud, respectively.
Also, commercial LLMs from organizations like Cohere and Mistral support on-premises and private cloud deployment. Meanwhile, open-source models such as Meta’s Llama and DeepSeek offer opportunities to create custom, private LLMs.
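In a fully private setup, the contact center application talks to an inference server you host yourself instead of a vendor API. The sketch below assumes an OpenAI-compatible chat endpoint on localhost (the kind exposed by common self-hosted inference servers such as vLLM); the endpoint URL and model name are assumptions for illustration, and only Python’s standard library is used so no customer data ever leaves the network.

```python
import json
from urllib import request

# Assumed local inference server; nothing here reaches the public internet.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"


def build_payload(model: str, user_message: str) -> dict:
    """Assemble an OpenAI-style chat request for a locally hosted model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }


def ask_local_model(prompt: str, model: str = "llama-3") -> str:
    """Send the prompt to the in-house inference server and return the reply.

    The response shape assumed here follows the OpenAI chat-completions
    convention that compatible servers emulate.
    """
    body = json.dumps(build_payload(model, prompt)).encode()
    req = request.Request(
        LOCAL_ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the endpoint resolves to infrastructure inside your own perimeter, the same application code works whether the model behind it is Llama, DeepSeek, or a privately licensed commercial model.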
The fully private configuration is required for organizations with the most stringent requirements. These include certain government agencies, aerospace and defense contractors, and any organization with extreme security mandates that prohibit any data transmission outside their own networks.
It enables AI integration without compliance violations because sensitive information never leaves the approved security perimeter.
While this deployment model typically requires greater infrastructure investment, it provides several critical benefits:
• Complete control over what data AI can access.
• The ability to curate and validate training and retrieval data within a security perimeter.
• The ability to give AI chatbots access to complete customer context securely while maintaining audit trails.
• Control over physical infrastructure.
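The audit-trail benefit above can be sketched concretely: inside the perimeter, every piece of customer context handed to an AI assistant can be recorded before it is released. The helper below is an illustrative sketch with invented names — in production the log would be an append-only, tamper-evident store, not an in-memory list.

```python
import datetime

# Illustrative only: a real deployment would use an append-only audit store.
AUDIT_LOG: list = []


def grant_context(agent: str, customer_id: str, fields: list) -> dict:
    """Record which customer fields an AI assistant was given, and when,
    so every data access inside the perimeter leaves an audit trail."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "customer_id": customer_id,
        "fields": sorted(fields),
    }
    AUDIT_LOG.append(entry)
    return entry
```

Logging the grant at the moment of access, rather than reconstructing it later from model traffic, is what makes the trail auditable: the record exists even if the downstream request fails.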
DEPLOYMENT WITHOUT COMPROMISING SECURITY
In regulated industries, security and compliance requirements cannot be compromised for faster deployment.
Organizations seeing genuine return on investment from AI are taking the time to implement it properly. They address data governance, choose appropriate deployment architectures, build organizational readiness, and ensure AI has access to the right data within acceptable security parameters.
AI adoption in support operations is a journey that requires matching your deployment model to your security requirements.
Whether that’s public cloud, virtual private cloud with encrypted access, sovereign cloud, or fully private deployment, the architecture you choose should enable AI adoption without introducing unacceptable risk.
For contact centers in regulated industries, this often means seeking out solutions that respect their infrastructure decisions rather than rushing to adopt cloud-only offerings that create security concerns, compliance violations, and data leakage risks.
Brad Murdoch is the CEO of Deskpro, a secure AI-powered help desk platform for customer and employee support. A veteran B2B tech leader, Brad has spent more than two decades helping companies scale from early-stage to commercial enterprises, guiding them through high-growth phases and successful acquisitions.