Chatbots Might Infuriate Customers

Chatbots are increasingly common in customer service environments. Indeed, some estimates suggest that 95% of online customer service will be provided by them by 2025. Research from the Queensland University of Technology suggests that such a strategy is not without risks.

The researchers found that while chatbots can be an effective medium for customer service, they can also infuriate customers, making them less likely to complete their purchase and provoking a significant degree of anger.

Stoking anger

The researchers found that chatbots are most likely to cause anger among customers if the customers are only informed late in the exchange that human intervention is available should it be needed. The authors believe that companies should heed this warning and rewrite their scripts accordingly.

“The capability for comprehending natural language and engaging in conversations allows chatbots to not only deliver customer services but also improve customer experiences through lowering customers’ efforts and allowing these customers to use time more efficiently elsewhere,” the researchers explain.

“However, despite the economic benefits for companies using chatbots in service encounters, they often fail to meet customers’ expectations, can undermine the customer service experience and lead to service failures.”

Interpreting requests

The key is to ensure that chatbots are able to successfully interpret customers’ requests; when they cannot, their ability to respond to the customer in a meaningful way is seriously undermined.

“Users can then feel frustrated and angry, become reluctant to use chatbots in the future, are less likely to make the purchase, or even switch to using another service provider entirely,” the authors continue. “We found that in a chatbot service failure context, telling a customer late in the service interaction that a human employee is available to help can lead to a greater chance of customer aggression.”

Consumers have long held an inbuilt expectation that humans will be on hand to help them with their inquiries, especially if the technology they’re currently dealing with isn’t up to the job. The researchers believe that if the option to talk with a human is only disclosed relatively late in the conversation with the chatbot, it is most likely to induce anger and aggression in the customer.

“Unexpectedly though, we found that when customers perceive a high level of participation, the positive relationship became negative in that customers were more likely to react with emotion and aggression when the chatbot service failed, if they were offered to interact with a human employee early (compared to late) in the service interaction,” the researchers continue.

“This could be because customers with a higher level of participation often value relationship building during the service co-creation process, and so may be more likely to desire interacting with a human employee. So, the early disclosure of the option to interact with a human employee may signal that a service provider has the human resources to support customers but does not value the customers enough to begin the interaction that way.”

On balance, then, if customer service teams offer human assistance alongside their chatbot-based service, it seems that the best time to disclose this is early in the interaction.

“In turn, service providers should design chatbot scripts that disclose the option of interacting with a human employee early in the customer-chatbot interaction, thereby making customers aware of the possible human intervention prior to the occurrence of chatbot service failures,” the authors conclude.
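For teams acting on this advice, a minimal sketch of what early disclosure might look like in practice is shown below. It assumes a simple hand-rolled bot; the `Chatbot` class, its greeting wording, and the two-failed-turns escalation threshold are all illustrative assumptions, not details drawn from the study.

```python
# A minimal illustrative sketch of "early disclosure" in a chatbot script.
# All names here (Chatbot, GREETING, escalate_to_human) are hypothetical,
# invented for this example rather than taken from the research.

class Chatbot:
    # Disclose the human-handoff option in the very first message,
    # before any service failure has a chance to occur.
    GREETING = (
        "Hi! I'm an automated assistant. I can help with most questions, "
        "and you can ask to speak with a human agent at any time."
    )

    def __init__(self, max_failed_turns: int = 2):
        self.failed_turns = 0
        self.max_failed_turns = max_failed_turns  # assumed threshold

    def open_conversation(self) -> str:
        return self.GREETING

    def respond(self, user_message: str) -> str:
        intent = self.interpret(user_message)
        if intent is None:
            # Could not interpret the request: count a failed turn and,
            # after repeated failures, escalate instead of looping on apologies.
            self.failed_turns += 1
            if self.failed_turns >= self.max_failed_turns:
                return self.escalate_to_human()
            return ("Sorry, I didn't catch that. Could you rephrase, "
                    "or ask for a human agent?")
        self.failed_turns = 0
        return self.answer(intent)

    def interpret(self, user_message: str):
        # Placeholder for real intent recognition (an NLU model, rules, etc.).
        return user_message.strip().lower() or None

    def answer(self, intent: str) -> str:
        return f"Here's what I found about '{intent}'."

    def escalate_to_human(self) -> str:
        return "Let me connect you with a human agent who can help."


if __name__ == "__main__":
    bot = Chatbot()
    print(bot.open_conversation())  # human option disclosed up front
    print(bot.respond(""))          # one uninterpretable turn
    print(bot.respond(""))          # second failure triggers escalation
```

The design point is simply that the handoff option appears in the opening message and is repeated when the bot stumbles, so a later failure never comes as the first mention of human help.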
