Implementing ChatGPT within a support team

Written by:

Natasha Ratanshi-Stein, Founder & CEO

Ironically, a very small part of this blog post is written by ChatGPT.

We’ve discarded parts of what it composed for missing nuanced context, but it captures many of the well-known, more standard practicalities well. Naturally, we’ve had to use our own expertise and learnings to add insights from having directly seen different companies work on ChatGPT implementation over the last two months.

The advantages of implementing ChatGPT for a support team

The most obvious benefit is cost reduction: using ChatGPT to automate more tasks allows a smaller team to deliver a more uniform quality of responses.

This benefit is nuanced, though: most of the heavy lifting ChatGPT can do is around context gathering rather than fully solving customer problems. That means it improves metrics like first response time, time to resolution, and cost per ticket, but it doesn’t remove the need for human intervention entirely.

For a utility company, this might mean collecting the correct meter readings or ensuring the query relates to the property on file before an agent spends time solving the customer’s problem. For an e-commerce company, it could mean confirming the order number; for a SaaS company, it might mean changing the bundle or package before a human initiates payment collection.
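The context-gathering pattern above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the field names and labels are assumptions chosen for the e-commerce example.

```python
# Illustrative sketch: the bot gathers required context before a human takes
# over. REQUIRED_FIELDS and the "ask:"/"handoff" labels are hypothetical.

REQUIRED_FIELDS = ["order_number", "email"]  # e.g. for an e-commerce flow

def next_step(collected: dict) -> str:
    """Return the bot's next action: ask for a missing field, or hand off."""
    for field in REQUIRED_FIELDS:
        if not collected.get(field):
            return f"ask:{field}"   # bot keeps gathering context
    return "handoff"                # everything gathered; route to an agent
```

The agent only enters the conversation once `next_step` returns `"handoff"`, which is where the first-response-time and cost-per-ticket gains come from.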

Secondary benefits include the ability to offer live chat 24/7 and consistency in responses, which can be difficult to scale as support teams grow and agents bring different writing styles and training. ChatGPT will likely be better than a disempowered or unmotivated support agent, but won’t be better than someone who is well trained with a high CSAT.

Configurability is a massive advantage. With the ability to train the AI on tone of voice and on how to respond to frequent queries specific to an industry or company, it works much better than the chatbots and other technologies released over the last decade, which have been either too generic or too time-consuming to set up properly.
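In chat-completion style APIs, this kind of tone-of-voice configuration is commonly expressed as a system message prepended to every conversation. The sketch below shows the shape of that idea; the guidelines text and function name are illustrative assumptions, and the actual API call is omitted.

```python
# Hypothetical tone-of-voice configuration as a system prompt.
# "Acme Co" and the guidelines are made-up examples.

TONE_GUIDELINES = (
    "You are a support agent for Acme Co. Be warm and concise. "
    "Never promise refunds; escalate billing disputes to a human."
)

def build_messages(history: list[dict], user_message: str) -> list[dict]:
    """Prepend the brand's tone guidelines to every conversation turn."""
    return (
        [{"role": "system", "content": TONE_GUIDELINES}]
        + history
        + [{"role": "user", "content": user_message}]
    )
```

Because the guidelines travel with every request, the brand's voice stays consistent regardless of which conversation, or which agent, the AI is assisting.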

How it’s working in practice and what we’ve seen from customers using it

Early experiments are looking promising, but the common fear among most leaders is chaos. Team members and managers lack certainty about how far they can trust it to use sound judgement and not say something that erodes the trust and perceived competence brands have built with their customers. If a customer says something horrible about the brand, how will the AI respond?

Most customer service queries are not single-threaded but involve a conversation with multiple back-and-forth messages. Rather than reducing the number of messages in a conversation or automating the entire thread, generative AI can handle some of the messages directly, only involving a human at the end when the problem is ready to be solved.
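That partial-automation pattern amounts to a routing decision on each message: the AI replies to routine turns and escalates once the thread is ready to resolve. A toy sketch, where the message-type labels are invented for illustration:

```python
# Sketch of per-message routing in a multi-turn thread.
# The message-type labels are hypothetical, not from any real taxonomy.

AUTO_HANDLED = {"greeting", "status_request", "info_provided"}

def route(message_type: str) -> str:
    """Decide whether the AI replies directly or a human takes over."""
    if message_type in AUTO_HANDLED:
        return "ai_reply"
    return "human_handoff"  # e.g. "ready_to_resolve", or anything unrecognised
```

Defaulting to `"human_handoff"` for anything unrecognised is the conservative choice, which matches the trust concerns leaders raise above.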

We’re seeing that support teams can’t set up ChatGPT on their own. Many teams investing in rolling it out properly need the help of data scientists to analyse inflow patterns and identify what the right responses look like from top-performing agents. This produces more relevant data to train the AI with, ensuring consistent and correct responses.
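The inflow analysis mentioned above can start as something very simple: counting query categories so the highest-volume ones are prioritised for training data. A toy version, with made-up ticket records:

```python
# Toy inflow analysis: find the most frequent query category.
# The ticket records below are invented examples.
from collections import Counter

tickets = [
    {"category": "billing"},
    {"category": "delivery"},
    {"category": "delivery"},
    {"category": "returns"},
]

inflow = Counter(t["category"] for t in tickets)
top_category, count = inflow.most_common(1)[0]
```

In practice this would run over real ticket exports and be paired with the top-performing agents' actual replies in each category, which is where the data-science effort goes.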

The risky future of over-implementation and our predictions for what’s coming next

ChatGPT and generative AI are going to be most powerful in asynchronous channels such as chat and email. With customers nervous about dealing with automation and not getting a resolution fast or accurately enough, we expect the phone to once again become a more popular channel.

Customers will gravitate towards channels where they are guaranteed to speak to humans they trust and have confidence in. This risks eroding the cost savings and productivity gains from implementing automated responses, since synchronous channels are notably costlier to run and less efficient.

Related to the risk of customers fearing dealing with AI, we expect that offering “human only support” is going to become a differentiator some customer experience teams start offering to stand out. In competitive spaces where there is complexity associated with customer queries, there’s nothing more off-putting than a customer receiving a generic or standardised response. Instead of fully embracing and investing in improving AI, companies may move away from it entirely using “lack of AI” as a competitive advantage.

We also see a risk that with the quick deployment and improvements companies are seeing from launching ChatGPT right now, they might hastily shrink their support teams or close down call centres only to have to rapidly re-open or grow them again in a few months.

The behaviour of AI still hasn’t been fully tested, nor do we know enough about how customers respond to interacting with AI.

Time will tell, but the zeitgeist of quickly deploying ChatGPT across all possible use cases, combined with cost-cutting pressure, may lead to an overreaction that needs to be reversed quickly to avoid tarnishing brand value and customer experiences.
