LLM-based chatbots and how to make them more reliable

Written by Livio Pugliese

November 24, 2023

ChatGPT and its siblings are all the rage in customer service chatbots. This is fascinating and terrifying. How do we take the terror out of the equation?

In the past year or so, we have witnessed an explosion of chatbots based on Large Language Models (LLMs). The adoption of LLM technology in conversational AI is truly revolutionizing the field, with a user experience that is better by leaps and bounds than what came before. LLMs hold immense promise for applications as varied as customer service, simultaneous translation, and general information delivery… anything that has to do with connecting the public with data through natural language, whether text- or voice-based.

But not all is bright and beautiful in AI land. There are also significant challenges. In this article I give my take on some of the most common challenges and suggest possible ways to overcome them.

Large Language Model predictability

LLMs are enormous collections of information fragments that the AI algorithm connects statistically. To provide answers, the algorithm takes the most probable path connecting one fragment to another, starting from the question and considering its context: who is asking, what the setting for the question is… This process does not always lead to a predictable outcome, as always happens when statistics are involved.
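To make the point concrete, here is a toy sketch (not a real LLM) of why sampling over probabilities yields variable answers: the model assigns probabilities to candidate continuations and draws one at random, so repeated runs can take different paths through the same statistics. The probabilities and tokens below are invented for illustration.

```python
import random

# Invented probabilities for possible next fragments.
next_token_probs = {"approved": 0.55, "denied": 0.30, "pending": 0.15}

def sample_next(probs, rng):
    # Weighted random choice over candidate continuations.
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

rng = random.Random()  # unseeded: runs are not reproducible
answers = {sample_next(next_token_probs, rng) for _ in range(20)}
# Across enough samples, more than one continuation typically appears,
# which is exactly the unpredictability described above.
```

Real LLMs do the same thing over tens of thousands of tokens at every step, which is why identical questions can produce different answers.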

For some use cases, it is not an insurmountable issue if answers from LLM chatbots vary to some degree. A conversation summary, for instance, can be rendered in many different ways, all fairly accurate. Or a product can be recommended by an LLM-based algorithm with different words and sentences.

But there are many other applications, closer to core customer service capabilities, where precision is paramount and slip-ups could be costly: anytime there are legal implications, for instance, or when the chatbot is used as an initial screen for someone applying for a loan. In these cases, while the potential of LLM-based algorithms is clear, the risk must be mitigated.

Large Language Model sensitivity to input changes

Language is fluid, and there are typically many ways to say the same thing. English speakers, like all humans, rely on turns of phrase, metaphors, and synonyms to express themselves. Not everyone will use the same words to ask for the same thing, depending also on the speaker’s education, frame of mind, age, location… As George Bernard Shaw said: “England and America are two countries separated by a common language” – well, even if the language is nominally the same, when the listener is an LLM-based chatbot the same request can take on vastly different meanings depending on the words used.

As an extreme example, suppose that one speaker says: “What would it take to cover all the bases and hit it out of the ballpark?”

Another speaker says: “How can we eliminate risk and be very successful?”

In American English, a human would understand these two sentences to mean the same thing. But when speaking to a chatbot the result can be very different, depending on whether the chatbot catches the metaphors.

A solution: sanitizing the input

Always sending the same wording to an LLM-based chatbot for each question would solve much of the precision problem. This is impossible in an open domain, where users can ask virtually any question. But it is possible, even relatively easy, in a well-defined domain like customer service. What we propose is to front the LLM chatbot with a call steering system, which uses natural language to determine the user’s intent, possibly through a dialog made up of several exchanges. Once it determines the intent, the call steering system always sends the chatbot the same worded question for that particular intent, which will have been vetted and tested to produce the best result.

Interactive Media has long experience with conversational applications, and a standard structure and process for creating applications that perform call steering. So we have created an all-in-one platform to help users interact with LLM chatbots in a customer service environment. We integrate PhoneMyBot, Interactive Media’s service that provides a voice channel to any chatbot, with our call steering platform, MIND, and with the LLM chatbot that actually does the heavy customer service work. When users call in, they reach MIND, which asks what they need, possibly over more than one exchange, and classifies their answers into one of the intents available in the domain. MIND then sends back the standard question for that intent, which PhoneMyBot forwards to the chatbot, receiving the answer and relaying it to the user.

This technique increases the quality and precision of LLM chatbots, making them more suitable for work in a customer service environment.

We would love to put more meat on the bone and talk to you about this solution: please contact Interactive Media at info@imnet.com or click the button below.
