For years, the CX industry has been sprinting to deploy customer-facing AI. Teams raced to launch chatbots before they were ready, plugged voicebots into broken service journeys, and introduced automation that wasn’t integrated into the systems agents rely on every day. The result? For too many people, AI is now associated with friction, confusion, dead ends, and customer frustration.
In the rush to innovate, teams have unintentionally damaged trust along the way, among customers, agents, and even senior leaders. And now, as AI finally becomes capable of truly intelligent, agentic, human-like support, CX teams must confront the knowledge gap at the root of that distrust. In the chase for AI-powered efficiency, deployment has far outpaced education. As a result:
- Customers frequently don’t understand what modern AI can do.
- Agents often don’t know which tools even use AI.
- Many leaders are unsure how to measure value or secure buy-in.
The next phase of AI transformation won’t be about adding more tech. It will be about demystifying the AI already in place, so every stakeholder can trust it – and make the most of its incredible potential.
Customers Still Think AI = “That Bad Chatbot”
While AI has evolved, consumer perception hasn’t. For many customers, “AI” is synonymous with a frustrating chatbot, one that can’t understand context, can’t access their account information, and can’t assist with anything remotely complex.
Even today, consumers say the only thing AI does better than a human is respond faster. Faster… but not better.
In response, customers have learned the shortcuts. Many “game” the system by immediately typing “agent,” “human,” or “representative,” skipping the bot before it even has a chance to help. They’re conditioned not to trust it.
If customers don’t understand AI, and have only experienced the worst of it, why would they believe things have changed?
To shift customer attitudes, we need more than improved tech. We need transparency, expectation-setting, and subtle behavioral nudges that show – not tell – that AI can support them effectively.
But Here’s the Real Twist: Agents Don’t Understand AI Either
You would expect frontline agents, people who work with AI tools every day, to be the most informed. But the reverse is true.
Our Voice of the Agent report revealed that only 35% of agents say they’re clear on which of their tools actually use AI. Most interact with AI daily, yet they can’t pinpoint where it appears in their workflow or what value it delivers.
And yet… 44% of agents say AI is useful in their day-to-day tasks. They feel the benefits of reduced admin, better information, and faster handling, but they rarely identify the source of those AI-driven improvements.
This lack of clarity fuels a bigger emotional contradiction:
- 55% of agents worry AI might change or replace their job.
- 48% want more AI-powered tools introduced.
This isn’t rejection. It’s curiosity without clarity. Agents are willing, even eager, to embrace AI, but they need reassurance, context, and education to feel confident doing so.
Generational Differences Matter: AI Isn’t One-Size-Fits-All
AI awareness is high, but comfort is uneven. Training and adoption must acknowledge these generational patterns:
Gen Z & Millennials: Enthusiastic but Anxious
- Over 60% use AI tools like ChatGPT or Gemini (mostly outside of work).
- They’re optimistic and experimental.
- But at 55%, they’re also the generation most concerned about long-term job impact.
They’re eager to learn, provided employers give them pathways to do so safely.
Gen X: Cautiously Curious
- They see the benefits but struggle to understand the practical, daily application.
- Clear, hands-on learning is key.
Baby Boomers: Low Usage, Low Confidence
- Nearly six in ten don’t use AI tools at all.
- Many view AI as irrelevant to their role, creating distance and distrust.
The Common Thread: A Massive Training Gap
- 40% of agents haven’t received any AI training but want it.
- Another 33% haven’t, and don’t think they need it (usually because they don’t recognize where AI appears in their work).
- Leadership mirrors this trend: 59% of contact center leaders admit they don’t provide ongoing support to help agents navigate AI workflows.
This is a knowledge crisis, not a technology crisis. And knowledge gaps create fear.
Why AI Still Feels “Dangerous” to Agents
When agents do talk about AI, they often cite the worst examples, because those are the most visible:
“AI has actively made the job harder by assisting fraudsters and misinforming the public.”
“I don’t want to work for companies that use AI. It stresses me out and annoys customers.”
These statements don’t stem from the technology itself. They stem from poor implementation, lack of training, and zero transparency.
We put AI at the front of broken journeys, not behind the scenes where it excels.
Agents rarely see the embedded AI that improves routing, forecasting, knowledge management, or analysis. They only see the parts that frustrate customers, and then have to deal with the fallout.
AI doesn’t create distrust. Poor integration and poor explanation do.
To Move Forward, We Must Rebuild AI Confidence – Not Rebuild the Tech Stack
Modern AI is capable: it’s agentic, it’s contextual, and it can integrate deeply into systems. But none of that matters if the people who use it don’t trust it.
The priority now is education: helping customers, agents, and leaders understand what AI truly is, where it appears, and how it supports, not replaces, human capability.
Confidence doesn’t come from coding. It comes from coaching.
So, What Can Leaders Do?
Here are five practical ways to demystify AI and restore trust:
- Make AI visible – not mysterious: Show agents which tools use AI and what tasks those models handle. Label it. Highlight it. Celebrate it.
- Integrate AI literacy into onboarding & ongoing coaching: AI shouldn’t appear as a surprise. Bring it into training, QA, 1:1s, and development conversations.
- Give every generation a tailored, hands-on experience: Short demos for Boomers, deep-dive exploration for Millennials, sandbox creative sessions for Gen Z. Different comfort levels require different approaches.
- Redesign customer journeys to restore trust: Use nudges! Explain what the bot can do, offer a choice between self-service and human support, and show success metrics (“Our assistant solved this issue for 89% of customers today”). Trust builds when AI proves itself.
- Reframe AI as a support tool, not a threat: Celebrate the tasks AI removes: admin, after-call work, repetitive processes. Position AI as a shield that protects agents from stress, not a replacement.
The Future of AI in CX Will Belong to Educated Organizations
AI isn’t going away, but the organizations that thrive won’t be the ones with the most automation. They’ll be the ones with the most understanding, where customers trust the experience, agents trust the tools, and leaders can confidently invest because they know the value.
To rebuild trust, we must shift the conversation from “What AI should we deploy next?” to “How do we help people understand the AI they already use?” That’s the real transformation, and it starts with education, transparency, and human-centered design.
To learn more about how agents really feel about AI, and how to support them through the next phase of transformation, download the full Voice of the Agent report here.



