Support worker (source: ChatGPT)

If customer support followed the Pareto principle, it would look something like this: roughly 80% of queries are variations on 20% of topics. Where is my parcel? How do I reset my password? When will my invoice arrive? I want to cancel my subscription. These queries form the backbone of every contact centre, and they are precisely the ones now systematically disappearing from human agents’ workloads.

This isn’t speculation. According to a 2023 Gartner study, by 2027 generative AI will handle approximately one third of all incoming customer support requests without any human involvement (Gartner, Predicts 2024: Customer Service and Support, 2023). IBM estimates that chatbots and conversational AI already handle over 265 billion customer queries annually worldwide, saving businesses an estimated $11 billion (IBM, The Value of Chatbots, 2022). The data is consistent: the automation of routine customer support isn’t a gradual drift, it’s a structural shift.

The key question, therefore, is not whether AI will take over routine work. It’s what remains for people, and whether they are truly ready for it.

The anatomy of displaced work

To understand what’s happening, we first need to break down what customer support actually involves. In its 2023 analysis of job roles, McKinsey & Company found that approximately 58% of activities in customer-facing roles consist of tasks highly susceptible to automation: repetitive queries, standardised information retrieval, basic transaction processing, and straightforward escalations (McKinsey Global Institute, The Economic Potential of Generative AI, 2023).

This is territory AI handles well and fast. Modern language models deployed in contact centres, whether platforms like Salesforce Einstein, Zendesk AI, or proprietary solutions such as Amazon Connect, can detect customer intent, query CRM systems, generate personalised responses, and close cases entirely. All within a fraction of a second, in any language, without a coffee break.

The remaining roughly 42% of cases rely on something different: the ability to read emotions, improvise in non-standard situations, make judgement calls in grey areas where the rulebook offers no guidance, or handle a customer who doesn’t want a solution so much as to be heard. This is the space AI cannot reliably occupy. Yet.

Simple queries go to AI. What’s left for people?

The shift of routine tasks to AI doesn’t mean support staff have less work. It means the nature of their work changes fundamentally. Every case that reaches them today is, in its own way, an exception: something the system couldn’t handle and therefore rejected or escalated.

In the literature, this is known as complexity bias: the systematic routing of complex and emotionally demanding cases to humans, while automation absorbs the straightforward ones. The result is that the average difficulty of every human interaction rises. Agents who once spent 80% of their time on standard queries and 20% on complex cases are now facing the reverse ratio, or heading there fast.

Research from Forrester found that customers who reach a human agent today are, on average, more emotionally charged (read: frustrated or anxious) than before, because simpler versions of their problem have already been resolved either by themselves or through automation (Forrester, The Future of Customer Service, 2023). Agents aren’t just dealing with harder problems. They’re dealing with harder people.

What this means for agent skills

Here lies a systemic problem that is consistently overlooked in discussions about AI in customer support. Most companies deploy AI as the front line, reduce headcount in their support teams or freeze hiring, but fail to reconsider what skills they actually need from the agents who remain.

Yet one critical condition goes unaddressed: handling complex cases requires a different skill set from the one traditional contact centres have historically trained for.

Standard agent training is built around scripts. How to respond to scenario A, B, or C. How to escalate. How to close a case. These are skills suited to routine operations, not to navigating a situation that defies every template. According to Harvard Business Review, the most effective agents in complex interactions are those who can adapt their communication style in real time, read the emotional subtext of a customer’s situation, and make autonomous decisions with incomplete information (HBR, The High Cost of Ignoring Customer Emotions, 2022). These are not skills you can learn from a script.

At the same time, the technical profile of the role is evolving. Agents increasingly function as AI supervisors, reviewing automated outputs, correcting escalations, and determining when and why the system failed. This creates a hybrid role for which no standardised training framework yet exists.

The customer satisfaction paradox

CSAT (Customer Satisfaction Score) data comparing AI versus human interactions reveals an interesting paradox. For simple, transactional queries, AI solutions achieve comparable or even higher satisfaction scores than humans: customers want speed and accuracy, not conversation (Salesforce, State of the Connected Customer, 2023). But for complex or emotionally charged situations, the preference for a human agent remains strong: 71% of customers in a PwC survey said that in a difficult situation, they would prefer speaking with a person over a chatbot, even if the AI were faster (PwC, Experience is Everything, 2018, still widely cited as a reference point).

In other words, customers are willing to accept AI for things they don’t feel deeply about. But when it matters, when a parcel is lost three days before Christmas, when an insurance claim has been denied, when an order arrives damaged, they want a human. And that human needs to be good.

The real question, then, is not whether AI will replace customer support. It’s whether companies can ensure that the people who remain in support are equipped to handle precisely the situations where AI falls short.

What companies should do differently

The data here is clear: successful organisations don’t approach AI deployment in customer support as a cost-cutting exercise, they treat it as an opportunity to redesign the agent role entirely.

In practice, this means three things.

  • First, rethink training programmes, moving away from scripts towards situational thinking. The best contact centres are borrowing approaches from other disciplines: motivational interviewing techniques from clinical psychology, negotiation frameworks from the business world. The goal is to develop agents who can work effectively with ambiguity.
  • Second, design escalation protocols deliberately. That means not simply passing to humans whatever AI couldn’t handle technically, but defining which types of situations should be routed to a person from the outset, regardless of whether AI could technically process them. Emotionally charged situations, complaints with reputational potential, or customers at critical moments in their journey should, by design, receive human contact.
  • Third, treat agent wellbeing as a key metric. A role that consists entirely of complex, escalated cases is psychologically demanding. Gallup research consistently shows that customer support workers rank among the professions with the highest rates of emotional exhaustion, and this trend is likely to deepen as routine work shifts to AI (Gallup, State of the Global Workplace, 2023). Companies that ignore this will find they have invested in AI infrastructure while losing the very people meant to complement it.
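The second point, routing by situation type rather than by what AI can technically process, can be made concrete as an explicit policy where human-by-design rules are evaluated before AI capability is even considered. The categories, fields, and thresholds below are illustrative assumptions, not a standard.

```python
# Sketch of a deliberate escalation policy: certain situations go to a
# human by design, even when the AI could technically process them.
# The case fields, categories, and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Case:
    sentiment: float    # -1.0 (furious) .. 1.0 (happy)
    is_complaint: bool  # formal complaint with reputational potential
    journey_stage: str  # e.g. "onboarding", "renewal", "churn_risk"
    ai_can_handle: bool # whether the model believes it could respond

CRITICAL_STAGES = {"churn_risk", "renewal"}

def route(case: Case) -> str:
    # Human-by-design rules come first, regardless of AI capability.
    if case.sentiment < -0.5:
        return "human"   # emotionally charged situation
    if case.is_complaint:
        return "human"   # reputational potential
    if case.journey_stage in CRITICAL_STAGES:
        return "human"   # critical moment in the customer journey
    # Only routine, low-stakes cases fall through to automation.
    return "ai" if case.ai_can_handle else "human"
```

The design choice worth noting is the rule order: `ai_can_handle` is checked last, so automation is the fallback for unremarkable cases rather than the default for everything it could plausibly answer.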

The value of human contact is rising

There’s an ironic conclusion the data keeps pointing to: the more work AI takes on in customer support, the more valuable every remaining human interaction becomes. Automation doesn’t erode the value of human agents; it amplifies it. But only if organisations deliberately invest in making their people exceptional precisely where AI falls short.

Companies that fail to grasp this will end up with a cheap AI front-end propped up by an under-skilled human back-end. The result? Customers who’ve fought their way through a frustrating automation loop, only to land with a support agent who can’t resolve their issue anyway. That’s not the future of customer experience. It’s just a rebranded version of its past.

Dan Bauer
Dan is our investigative AI journalist, drawing on all manner of sources and AI to bring you articles about CX in the highest possible quality. No one has ever seen him, though everyone would like to.