Customer Service & AI: A Leadership Failure, Not a Technology One.
(5 Min Read)
Hi, it’s David Lambert, and welcome to The Business Growth Blueprint, my weekly newsletter where I delve into the critical elements of business growth—strategy, leadership, operations, and the technologies shaping tomorrow. Subscribe to join 2,400+ readers who get The Business Growth Blueprint delivered to their inbox every week.
Introduction
Customer service did not always feel as broken as it does today. It’s almost as if companies are competing to see who can provide the worst service. It’s terrible.
Early in my career, I worked on the strategy and launch of a new call center from the ground up. The work was not glamorous. We focused on “best practice” fundamentals: how quickly calls were answered, whether issues were resolved the first time, call handle time, and whether customers felt helped when the interaction ended. That discipline mattered. The center went on to win Best New Call Center in its category, not because of technology, but because of clarity around ownership, processes, and outcomes. We cared about what the customer experienced. The center was not viewed as a cost center but as a differentiator in growing the business.
Later, I worked with a software company whose purpose was to do something that now sounds obvious: connect phone, email, text, and chat into a single view of the customer. Provide real-time intelligence on the customer experience and identify where that could be improved. At the time, that capability was novel and powerful. It allowed organizations to see customers as people, not just isolated tickets. Demand for that capability was so strong that the company was ultimately acquired.
In both cases, customer experience was viewed and treated as a differentiator. Leaders invested in it. Systems were designed to support judgment. Technology existed to enable better service, not avoid it.
That mindset feels increasingly rare today.
Customer service is now one of the most frustrating parts of doing business, not because problems are harder to solve, but because organizations have optimized service for efficiency rather than resolution. AI was supposed to improve this. Things would move faster. Costs would come down. And the customer experience would improve.
The logic was compelling. Machines don’t get tired. They don’t lose patience, call in sick, or leave for competitors. With enough data and training, AI could answer questions instantly, resolve issues consistently, and scale service without friction: faster service, happier customers, and leaner operations.
What customers are getting today instead feels very different.
They’re trapped in automated loops. They repeat themselves. They struggle to reach a human, if they can at all, when the issue actually matters. Too many interactions end without resolution or clarity.
This didn’t happen because AI is incapable. It happened because leaders misunderstood what customer service is for and how to apply the technology, and they’re still getting it wrong.
Let’s dig in.
Please feel free to comment and subscribe - it’s free!
Customer Service Is a Moment of Truth, Not a Transaction
People do not reach out to customer support when things are working as expected. They reach out when something breaks, when money is at risk, when time has been wasted, when expectations were missed, or when they truly need help. These moments are rarely neutral. They are emotional. They are stressful. Sometimes they are urgent.
In those moments, customers are not looking for speed alone. They are looking for judgment, empathy, and accountability. They want to know that someone understands the situation and is willing to own the outcome.
Those qualities aren’t technical features. They’re leadership behaviors, either embedded in systems or neglected by them.
And when those behaviors are missing, the damage goes well beyond lost revenue.
Consider Air Canada. Its chatbot told a grieving grandson that he could retroactively apply a bereavement fare after purchasing full-price tickets. That information was wrong. When the airline refused to honor the refund, a tribunal ruled Air Canada liable for the chatbot’s misinformation.
The airline’s defense? The chatbot was a “separate legal entity,” responsible for its own actions.
This wasn’t a technology failure. It was a leadership failure, an attempt to outsource judgment, empathy, and accountability to a system without owning the outcome.
AI Entered Through the Wrong Door
Instead of starting with the customer’s experience, most organizations began with internal economics. The dominant questions were not “Where does AI improve trust?” or “Where does automation actually help customers?” They were “How many contacts can we deflect?” and “How much labor can we take out?”
From that framing, the outcome was predictable.
AI was positioned as a gatekeeper, the first line of defense between the customer and a human being. Escalation paths were buried. Human judgment was treated as inefficiency rather than value. Customers were encouraged to rephrase their questions rather than resolve problems.
On paper, efficiency improved. Dashboards lit up with shorter response times and lower volumes. In reality, trust eroded.
Organizations invested $47 billion in AI initiatives in the first half of 2025.
Yes, $47 billion.
Yet nearly 90% of that spend produced minimal returns. Not because the models failed, but because the implementations did. Projects collapsed under compliance constraints, organizational fragmentation, and the messy reality of customer interactions that don’t neatly fit into decision trees.
A recent example makes the risk concrete.
Cursor’s AI support agent, “Sam,” fabricated a completely fictional policy stating that developers were limited to one device per subscription due to “security features.” The policy did not exist. The hallucination spread quickly across developer forums and social channels, triggering subscription cancellations and a viral backlash before the company could intervene.
The damage wasn’t caused by AI making a mistake. It was caused by deploying AI as an authority without guardrails, ownership, or accountability.
Automation Scaled Broken Service Models
Many customer service organizations were already fragile before AI arrived. Knowledge bases were outdated. Policies were rigid and internally focused. Frontline employees lacked the authority to resolve issues end-to-end. Ownership was fragmented across teams and systems.
AI learned those systems exactly as they were.
When you automate a broken process, you do not create efficiency. You create faster failure. You remove friction for the organization while increasing friction for the customer, and you do it at scale.
Speed became the wrong proxy for success. First-response times dropped. Throughput increased. Contacts per agent declined. These metrics looked impressive in executive reviews.
But speed is not resolution.
A fast answer that does not solve the problem is worse than a slower one that does. Customers do not remember how quickly you responded. They remember whether you took ownership, whether the issue was resolved, and whether they felt respected in the process.
AI optimized for motion. Customers needed progress.
What makes these failures especially damaging is the emotional context in which they occur. When an AI delivers false information with confidence, it does more than fail to resolve the issue. It violates trust at the exact moment trust matters most. And the consequences rarely stay contained. Broader research continues to show that customer support depends on empathy, nuance, and situational judgment, capabilities that AI still struggles to replicate reliably.
But where do we go from here?
The Real Failure Is Leadership, Not Technology
In many AI-driven service systems, escalation is deliberately buried, treated as an exception, a breakdown, or a last resort. Customers are pushed toward articles they’ve already read, asked to rephrase questions they’ve already asked, or offered partial answers that sidestep the real issue.
That design choice sends a clear message, whether intended or not: the system exists to manage customers, not to help them.
Leaders also underestimated the psychology at play. Customers tolerate automation when the stakes are low. But when money, fairness, or time is on the line, they want agency. They want discretion. They want a capable human who can exercise judgment.
AI struggles in these moments not because it can’t generate language, but because it lacks lived context. It cannot truly assess fairness. It cannot recognize when bending a rule is the right thing to do. And it cannot be held morally accountable for the outcome.
The result is not always immediate backlash. More often, it’s quiet disengagement. Customers stop calling. They stop complaining. They stop giving the company the benefit of the doubt.
They leave later.
At its core, this is not a technology failure. It is a leadership failure.
What leaders underestimate is that customer service is where brand promises are tested in real time, where strategy meets reality.
The organizations getting this right are not abandoning AI. They are redesigning how it is used. They make escalation fast and obvious. They define clear boundaries between automation and human judgment. And they hold leaders accountable not just for efficiency, but for resolution quality and customer confidence.
Most importantly, they accept a difficult truth: some interactions should never be optimized purely for cost.
The real value of AI is not replacing human judgment, but amplifying it. And consumers understand this intuitively. A majority still prefer human engagement for customer support, not out of resistance to technology, but out of disappointment with how AI has been deployed. The issue isn’t that customers don’t want automation. It’s that they don’t want to be abandoned by it.
88% of contact centers use AI-powered solutions, but only 25% have fully integrated automation into daily service operations, indicating a gap between deployment and effective use.
Swedish fintech Klarna made a very public push to automate customer support with AI, claiming its systems could replace the equivalent of hundreds of human agents. But customer satisfaction dropped and service quality suffered, prompting leadership to pivot. Rather than eliminating human support, Klarna began reassigning internal staff, including engineers and marketers, into customer support roles, acknowledging that AI had been over-relied on and had underdelivered for complex customer interactions.
What Leaders Should Do Now
Fixing customer service in an AI-enabled world does not require abandoning technology. It requires changing how leaders think about responsibility, judgment, and success.
The result of getting this wrong is already clear: 53% of consumers actively dislike or hate AI-driven service interactions. I am one of those. My experience has been less than stellar.
However, I am not giving up. In my own business, we are using AI thoughtfully, approaching it from the customer/client perspective and working backward. So what can you do as a leader in your organization?
First, redefine what “good” looks like. If your primary service metrics are speed, deflection, or cost per contact, you are measuring the organization’s convenience, not the customer’s outcome. Add metrics that reflect resolution quality: first-contact resolution, repeat contact rates, escalation effectiveness, and post-resolution confidence.
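For teams that already export ticket data, these resolution metrics are simple to compute. Here is a minimal sketch in Python; the record fields and ticket structure are illustrative assumptions, not a real schema:

```python
# Illustrative ticket records; field names are assumptions, not a real system's schema.
tickets = [
    {"customer": "A", "resolved_on_first_contact": True,  "escalated": False, "reopened": False},
    {"customer": "B", "resolved_on_first_contact": False, "escalated": True,  "reopened": False},
    {"customer": "A", "resolved_on_first_contact": False, "escalated": True,  "reopened": True},
    {"customer": "C", "resolved_on_first_contact": True,  "escalated": False, "reopened": False},
]

def service_metrics(tickets):
    """Resolution-quality metrics, meant to sit beside speed-oriented dashboards."""
    n = len(tickets)
    # First-contact resolution: share of issues solved in a single interaction.
    fcr = sum(t["resolved_on_first_contact"] for t in tickets) / n
    # Repeat-contact rate: share of issues the customer had to raise again.
    repeat_rate = sum(t["reopened"] for t in tickets) / n
    # Escalation effectiveness: of escalated tickets, how many stayed resolved?
    escalated = [t for t in tickets if t["escalated"]]
    esc_eff = (sum(not t["reopened"] for t in escalated) / len(escalated)) if escalated else None
    return {
        "first_contact_resolution": fcr,
        "repeat_contact_rate": repeat_rate,
        "escalation_effectiveness": esc_eff,
    }

print(service_metrics(tickets))
# -> {'first_contact_resolution': 0.5, 'repeat_contact_rate': 0.25, 'escalation_effectiveness': 0.5}
```

The point is not the code itself but what it measures: each number reflects the customer’s outcome, not the organization’s throughput.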
Second, make escalation a feature, not a flaw. Audit your service journey and ask how hard it is for a customer to reach a capable human when the issue actually matters. If escalation paths are hidden or delayed, redesign them. Remember, the value of AI is in enhancing human capability, not in always replacing it.
Third, clearly separate what automation should handle from what it should not. AI excels at low-emotion, repeatable interactions. It performs poorly when discretion, fairness, or empathy are required. Draw that boundary explicitly.
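That boundary is worth encoding in the routing layer itself rather than leaving it to the model’s discretion. A hedged sketch of a rule-based gate follows; the topic names, signal fields, and thresholds are all illustrative assumptions:

```python
# Topics where money, fairness, or grief is on the line; illustrative list.
HIGH_STAKES_TOPICS = {"billing_dispute", "refund", "bereavement", "account_security"}

def route(topic: str, sentiment: float, repeat_contact: bool) -> str:
    """Route to a human whenever discretion, fairness, or empathy is likely needed.

    sentiment: -1.0 (distressed) to 1.0 (positive); the -0.4 cutoff is illustrative.
    """
    if topic in HIGH_STAKES_TOPICS:
        return "human"   # stakes are high: never fully automate
    if sentiment < -0.4:
        return "human"   # emotional context: automation handles this poorly
    if repeat_contact:
        return "human"   # the automated path already failed this customer once
    return "ai"          # low-emotion, repeatable interaction

print(route("password_reset", 0.2, False))  # -> ai
print(route("refund", 0.2, False))          # -> human
```

The design choice is deliberate: the rules err toward escalation, because a human handling a routine question costs minutes, while a bot mishandling a high-stakes one costs trust.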
Fourth, empower frontline teams with real authority. If escalation only leads to another script or another handoff, trust erodes further. Variability in judgment is not a defect; it is often the cost of genuine service.
Fifth, hold leadership accountable for service outcomes, not just efficiency. What leaders measure signals what the organization truly values.
Conclusion: The Real Test
Customer service is no longer just an operational function. It is one of the clearest expressions of leadership a customer will ever experience.
AI did not ruin customer service. It revealed the trade-offs leaders were already making between cost and care, efficiency and ownership, and automation and accountability.
The organizations that will win over the next decade will not be the most automated. They will be the most intentional. They will understand where technology belongs and where people still matter most.
Because when something goes wrong, and eventually it always does, customers are not asking whether your system is efficient.
They are asking whether you are worth trusting. Are you?
Sources
Insights and data in this newsletter are drawn from publications and reporting, including:
Gartner — Customer preference and switching risk related to AI in customer service; difficulty reaching human agents as a top concern
Gartner — Self-service resolution rates and limitations of automated support
Academic research on algorithm aversion and customer resistance to automated decision systems
Human–computer interaction studies on empathy gaps in chatbot-based service
Industry analyses on AI deployment failures in customer experience (McKinsey, CMSWire)
Public polling on declining trust in automated and AI-mediated digital interactions