LLM-based chatbots and how to make them more reliable

ChatGPT and its siblings are all the rage in customer service chatbots. This is fascinating and terrifying. How do we take the terror out of the equation?

In the past year or so we have witnessed an explosion of chatbots based on Large Language Models (LLMs). The adoption of LLM technology in conversational AI is truly revolutionizing the field, with a user experience that is better by leaps and bounds than what came before. They hold immense promise for applications as varied as customer service, simultaneous translation, and general information delivery… anything that has to do with connecting the public with data through natural language, whether text- or voice-based.

But not all is bright and beautiful in AI land. There are also significant challenges. In this article I give my take on some of the most common challenges and suggest possible ways to overcome them.

The predictability of Large Language Models

LLMs are an enormous collection of information fragments that the AI algorithm connects in a statistical way. To provide answers, the algorithm takes the most probable path connecting one fragment to another, starting from the question and considering the question's context: who is asking it, what the setting for the question is… This process does not always lead to a predictable outcome, as always happens when statistics are involved.
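One concrete illustration of this statistical behavior: most LLM APIs expose a sampling "temperature" that controls how strictly the model sticks to the most probable path. Here is a minimal sketch using the OpenAI Python library as it was at the time of writing (the v0.x API); the prompt and key are placeholders, and a temperature of zero makes answers far more repeatable, though still not perfectly deterministic.

import openai  # openai-python v0.x, current when this article was written

openai.api_key = "sk-..."  # placeholder key

# Temperature controls how strictly decoding follows the most probable path.
# At 0 it is (near-)greedy: much more repeatable, though not a guarantee of
# identical answers every time.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize our refund policy."}],
    temperature=0,
)
print(response.choices[0].message.content)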

For some use cases this is not an insurmountable issue, as answers from LLM chatbots are allowed to vary to some degree. A conversation summary, for instance, can be rendered in many different ways, all pretty accurate. Or a product can be recommended by an LLM-based algorithm with different words and sentences.

But there are many other applications, closer to core customer service capabilities, where precision is paramount and slip-ups could be costly: anytime there are legal implications, for instance, or when the chatbot is used as an initial screen for someone applying for a loan. In these cases, while the potential of LLM-based algorithms is clear, the risk must be mitigated.

Large Language Models' sensitivity to input changes

Language is fluid, and there are typically many ways to say the same thing. English speakers, like all humans, rely on turns of phrase, metaphors, and synonyms to express themselves. Not everyone will use the same words to ask for the same thing, depending also on the speaker's education, frame of mind, age, location… As George Bernard Shaw said: "England and America are two countries separated by a common language". Well, even if the language is nominally the same, if the listener is an LLM-based chatbot the same request could take on vastly different meanings depending on the words used.

As an extreme example, suppose that one speaker says: “What would it take to cover all the bases and hit it out of the ballpark?”

Another speaker says: “How can we eliminate risk and be very successful?”

In American English, a human would understand these two sentences to mean the same thing. But when speaking to a chatbot the result can be very different, depending on whether the chatbot catches on to the metaphors.

A solution: sanitizing the input

Always sending the same wording to an LLM-based chatbot for a given question would solve a good deal of the precision problem. This is impossible in an open domain, where users can ask virtually any question. But it is possible, and even relatively easy, in a well-defined domain like customer service. What we propose is to front the LLM chatbot with a call steering system, which uses natural language to determine the user's intent, possibly through a dialog made up of several exchanges. Once it determines the intent, the call steering system always sends the chatbot the same worded question for that particular intent, one that has been vetted and tested to produce the best result.

Interactive Media has long experience with conversational applications and a standard structure and process for creating applications that perform call steering. So we have created an all-in-one platform to help users interact with LLM chatbots in a customer service environment. We integrate PhoneMyBot, Interactive Media's service that provides a voice channel to any chatbot, with our call steering platform, MIND, and with the LLM chatbot that actually does the heavy customer service lifting. When users call in, they reach MIND, which asks them what they need, possibly over more than one exchange, and classifies their answers into one of the intents available in the domain. It then sends the standard question for that intent, which PhoneMyBot forwards to the chatbot, receiving the answer and relaying it to the user.
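To make the idea concrete, here is a minimal sketch of the routing logic, assuming a hypothetical keyword-based classifier and a generic llm_ask callable; none of the names below come from MIND's or PhoneMyBot's actual APIs.

from typing import Optional

# One vetted, pre-tested wording per intent: the LLM always sees the same question.
CANONICAL_QUESTIONS = {
    "loan_eligibility": "What are the requirements to qualify for a personal loan?",
    "card_block": "How do I block a lost or stolen credit card?",
}

def classify_intent(user_utterance: str) -> Optional[str]:
    """Stand-in for the call steering step (in reality an NLU model and a dialog)."""
    text = user_utterance.lower()
    if "loan" in text:
        return "loan_eligibility"
    if "card" in text and ("lost" in text or "stolen" in text):
        return "card_block"
    return None

def answer(user_utterance: str, llm_ask) -> str:
    intent = classify_intent(user_utterance)
    if intent is None:
        # No confident intent yet: keep the steering dialog going.
        return "Could you tell me a bit more about what you need?"
    # Forward the vetted wording, never the raw user text.
    return llm_ask(CANONICAL_QUESTIONS[intent])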

This technique increases the quality and precision of LLM chatbots, making them more suitable for work in a customer service environment.

We would love to put more meat on the bones and talk to you about this solution: please contact Interactive Media at info@imnet.com or click the button below.


Multimodal interactions: are they breaking through?

Last week I watched a webinar and demo by a company providing tools and solutions for conversational customer service. Interactive Media, where I work, is in the same sector and I wanted to scope out a competitor, see what they have and how they are presenting their solutions to the market. Everyone does this of course; I don't feel bad about it in the slightest.

This company was presenting with great emphasis a solution that allows a caller to synchronize a voice call (on a smartphone) with a visual IVR component. In essence, when users call, they are offered the option to receive a text message containing a link to a personalized web application. The web app provides information about what the call is about, and users can navigate it by clicking through the pages, or by voice.

In the demo, this provides a great experience: all the information regarding the case is at the user’s fingertips, and it’s much easier to insert additional data. For instance, think about how hard it is to dictate an email address to an agent (let alone a virtual assistant!). With this type of visual IVR, the user can simply type it into a box, a much more efficient and error-free process.

This particular solution is not simple: you have to use conversational AI to understand what the caller says, identifying the intent and navigating precisely by voice; populate the web pages that service the intent on the fly; create and send the link by text message; and, most difficult of all, synchronize the voice and web parts of the session. Well done!

But seeing this demo left me rather surprised: you see, I was doing exactly the same demo with Interactive Media software 5 years ago (and I have the videos to prove it). This made me realize two things. One is that the Interactive Media people and technology are kick-ass, well ahead of most of the competition. But the other, considering that I was not able to sell this solution, is that sometimes focusing on increasingly sophisticated, "frictionless" services does not pay.

That demo is fantastic, but how many similar applications have you seen in real life? And, based on your real-life experience, how often would you need something similar? In essence, it seems to me that we as an industry are targeting increasingly complex software solutions to an ever-decreasing number of users.

The vast majority of users hope never to have to contact customer service. But when they do, it is often for a simple question, one that does not generally need this type of infrastructure. Normally, users can search the company web site, chat with an agent or a chatbot, or call in. A good percentage of people who call in do so because they are not comfortable with other channels, either because they are on the move and voice is the best way to interact, or because they are not familiar with web technology. Again, as an industry, we tech people tend to project our own experience onto everyone; folks, this is not the entire world!

For people who call in with voice only, there is PhoneMyBot, the Interactive Media service that provides voice channels to chatbots with a no-code, ready-to-roll approach. Companies that have deployed chatbots but have no conversational AI on the voice channel can use PhoneMyBot to enable telephone conversations with their existing self-service app. Conversational AI vendors who only support textual and web channels can use PhoneMyBot to offer voice channels to their customers. PhoneMyBot targets simpler self-service voice solutions for the vast majority of users.

But if you really need a synchronized voice and Visual IVR application for flashy service to your most tech-savvy customers, why don’t you also call Interactive Media? After all, we have a 5-year advantage.

Please go ahead and try PhoneMyBot for free: contact Interactive Media at info@imnet.com or click the button below.


PhoneMyBot and ChatGPT: giving voice to AI

Talking with ChatGPT over the phone is cool. But we can also make it useful.

In the past few months tech people worldwide have been talking almost exclusively about OpenAI's ChatGPT. It's the first large language model chatbot to make a splash, and what a splash! It landed with the energy of the Chicxulub asteroid, the one that killed the dinosaurs 65-odd million years ago off the Yucatán coast. That asteroid generated a mile-high tsunami, almost as high as ChatGPT's. One should go slow with the metaphors though: are ChatGPT and its peers going to kill …[gasp]… us? As an incorrigible optimist, I don't think so, but the jury is out according to much of the press.

But as we all know, even if a device could cause the end of the world as we know it, if it’s new and shiny people will use it. So, after using ChatGPT to write poems about pickleball or essays on Tibetan literature, the tech community is trying to understand what it can do for real business.

For instance, we at Interactive Media have integrated ChatGPT with PhoneMyBot, our service that provides voice channels to chatbots with a no-code, ready-to-roll approach. Through PhoneMyBot it is now possible to make a phone call to ChatGPT, ask questions, and listen to its answers. This is still only a demo, but in the process we have developed some useful ideas on whether and how ChatGPT may work for what we normally do, which is providing tools for companies to service their customers.

Let’s say it immediately: without personalization, ChatGPT is not sufficient to implement a customer service voicebot. The domain is too wide: it is literally the whole Internet. This means that ChatGPT cannot use its normal language model to answer pointed questions on – say – your bank balance today.

To be sure, in customer service there is sometimes a need for general-purpose conversation. In our experience, users sometimes go out on a tangent and ask bots all sorts of questions. For instance: where do you live? how old are you? can I see you? how much are you paid?… ChatGPT certainly has good answers for all these questions, and it would be useful in side conversations. ChatGPT is also language-independent: in essence it can tell what language a user is speaking and answer in the same language. This is a stunning capability and it makes it so much easier to use ChatGPT.

However, it is possible to "fine-tune" ChatGPT's underlying models for specific domains, adding dozens, hundreds, or thousands of examples of specialized prompt-completion pairs that define a separate domain, identified by its own name and ID. This domain augments the general-purpose model and allows the chatbot to answer pointed questions. At Interactive Media, we are experimenting with fine-tuning one of the available general-purpose models and we can certify that it works: if the model has the necessary information, it not only answers precise questions about the specific domain effectively, but does so in a pleasant and precise way.
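As a sketch of what those prompt-completion pairs look like in practice, here is the legacy OpenAI fine-tuning flow as it existed when this article was written (openai-python v0.x); the file name, examples, and base model choice are ours, not the exact setup Interactive Media uses.

import json
import openai

openai.api_key = "sk-..."  # placeholder key

# Dozens to thousands of specialized prompt-completion pairs define the domain.
examples = [
    {"prompt": "What documents do I need to open an account? ->",
     "completion": " A valid ID and proof of address.\n"},
    {"prompt": "How long does a wire transfer take? ->",
     "completion": " Usually one business day within the country.\n"},
]
with open("domain.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the training file, then start a fine-tune on a base model.
upload = openai.File.create(file=open("domain.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=upload.id, model="davinci")
print(job.id)  # poll this job until the tuned model is ready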

Often, however, customers' questions may be ambiguous and hard to characterize. In this case ChatGPT can't be allowed to answer immediately, as what it says would be vague or incorrect. But Interactive Media's conversational AI platform, MIND, placed in front of ChatGPT, can easily be configured to deal with these cases. MIND can identify the real intent of the caller through an initial dialog, and only after that forward the "real" question to ChatGPT. This makes a huge difference in the conversation outcome.

There's another snag though: to be useful, you also need access to actual data related to people's requests. ChatGPT of course does not perform the appropriate queries into company databases or CRMs. To address this, at Interactive Media we have developed a generalized method to access database data and insert it into ChatGPT's answers. Of course, depending on the data, this has to be done on a case-by-case basis, so call us to discuss!
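To illustrate the general idea (not Interactive Media's actual method, which is necessarily case-by-case), here is a hypothetical sketch where a verified figure is fetched from a database and handed to the model, so the model phrases the answer but never has to guess the number. The schema, query, and prompt are invented.

import sqlite3

def account_balance(customer_id: str) -> float:
    conn = sqlite3.connect("bank.db")  # stand-in for the company database or CRM
    row = conn.execute(
        "SELECT balance FROM accounts WHERE customer_id = ?", (customer_id,)
    ).fetchone()
    conn.close()
    return row[0]

def answer_with_data(customer_id: str, llm_ask) -> str:
    balance = account_balance(customer_id)
    # The verified figure is inserted into the prompt, so the model only
    # handles the phrasing, not the facts.
    prompt = (f"The customer's current balance is {balance:.2f} EUR. "
              "Tell them their balance politely, in one sentence.")
    return llm_ask(prompt)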

Please go ahead and try PhoneMyBot’s connection with ChatGPT: contact Interactive Media at info@imnet.com or click the button below.


The future of intelligent voice

As the market for smart speakers falters, what are the Big Three (Amazon, Apple, Google) going to do?

Alexa, should I bring an umbrella out tomorrow? This is a question that owners of smart speakers have been asking since 2014, the year Amazon released its first Echo product. Soon Google and Apple followed suit, with their Google Assistant and Siri technologies.

While Siri is embedded into Apple hardware as a software feature, both Amazon and Google produced and actively started selling hardware to support their speech software: lines of smart speakers with sensitive microphones that listen for people uttering a key phrase before capturing what they say. The rise of these devices has been meteoric. They were cheap and convenient, and they largely supplanted both radio and stereo systems in the home by streaming content controlled by voice. They sold by the tens of millions, both in the US and around the world: according to a Comscore report, in 2021 almost half of US internet users owned at least one.

Most people in the US are familiar with Alexa: she listens to the sounds around her, and when she hears her name she springs into action. This means recording the sentence that comes after the keyword and sending the audio to the Amazon Cloud for recognition, then receiving the answer and playing it back. (Supposedly, nothing is recorded outside of the keyword-initiated transaction, of course.) The same is true for the Google version; "Hey Google" is both longer and less personal.

As an aside, I know someone whose name is Alexa, and it was her name well before Amazon released the first Echo: I wonder how she feels about being called upon to do the bidding of countless people…

The problem with the status quo: lack of revenues

As often happens in the tech industry, with smart speakers the technology leapt ahead of the profitable use cases. Yes, people were and are using their smart speakers often, but mostly to ask general questions, check on the weather, and request music streaming. The vendors figured that, with time and as adoption increased, they could come up with a revenue model that would support the business, but so far no one has managed it.

Of course, there are ads within music streaming if the owner does not subscribe to a music service, but they are kept few and far between so as not to degrade the experience too much. And a $10-a-month music subscription is hardly enough to support providing and maintaining the infrastructure for the rest of the service.

The most profitable use case hoped for at the beginning, shopping by voice, never took off: people are understandably wary of providing personal information, credit card numbers, and the like to the Cloud through yet another channel, and by definition any shopping done through a smart speaker is "sight unseen".

So, in the past few months, with the changing economy and the realization of how difficult it is to really monetize smart speakers, there has been a definite retrenching by both Amazon and Google. Amazon laid off a good portion of the Alexa development team, Google reportedly greatly reduced funding for the Assistant line and, this is very recent news, Alphabet is laying off as many as 12,000 workers in January 2023. One can imagine that the worst-performing divisions will be most affected.

Smart speakers are in trouble.

Voice apps on smart speakers

However, many companies and organizations have developed apps that integrate with Alexa and Google Assistant through the respective APIs. In this case, the smart speakers act simply as a speech transcription and rendering interface: once the app is active, they transcribe what the user says and send the text to the external service, then take the text the service sends back and render it into voice for the user to hear.

Amazon calls these apps Skills; Google calls them Actions. Either way, there are hundreds of thousands of them. They can be launched with a special prompt: "Alexa, open [skill name]" or "Hey Google, talk to [action name]". While many apps have not been successful and see minimal use from this channel, others are important or even essential.
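For a feel of the plumbing, here is a minimal Alexa Skill handler using Amazon's ask-sdk-core library for Python; the intent name and reply are invented, and a real Skill would typically forward the transcribed text to its own backend service.

from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

class UmbrellaIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        # Alexa has already transcribed the speech and matched it to this intent.
        return is_intent_name("UmbrellaIntent")(handler_input)

    def handle(self, handler_input):
        # The Skill only deals in text; the speaker renders the voice.
        speech = "Yes, rain is likely tomorrow. Bring an umbrella."
        return handler_input.response_builder.speak(speech).response

sb = SkillBuilder()
sb.add_request_handler(UmbrellaIntentHandler())
lambda_handler = sb.lambda_handler()  # entry point when hosted on AWS Lambda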

What happens to these apps if the smart speaker vendors limit and then terminate their offer? Some are merely activating an additional channel to a wider service, and presumably would not be impacted too severely. But others were developed specifically to take advantage of the voice channel offered for free by smart speakers. For instance, I recently talked with the developer of a skill for blind people, who use their voice to access information that others get from screens. 

Skills and Actions developers are seriously worried.

On the other hand, what other conduits are there for two-way, intelligent voice applications in the house? Well, the one we've always had: the telephone (no matter whether fixed or mobile). Granted, calling an app over the phone is a little more complex than simply saying "Hey Google", but everyone knows how to use a phone and the technology could not be more tried-and-true. The problem then is connecting existing intelligent applications to the telephone network.

PhoneMyBot as the conduit for voice apps

Interactive Media offers PhoneMyBot, a service born to give chatbots access to voice channels. It performs the same functions that intelligent speakers perform for their apps: transcribing the users' speech and sending it to the connected application, then receiving text in return and transforming it into speech, piping it into the voice network. PhoneMyBot is natively integrated into the telephone network and exposes to apps an API equivalent to those of Alexa and Google Assistant. In addition, PhoneMyBot integrates with a number of contact center suites to transfer the call to a human agent if necessary.
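As a hypothetical sketch of what the application side of such an integration can look like, consider a small web endpoint that receives the transcribed speech as JSON and returns the text to be rendered as voice; the route and field names are invented, not PhoneMyBot's actual API.

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/voicebot", methods=["POST"])
def voicebot():
    user_text = request.get_json()["text"]   # what the caller said, transcribed
    reply = my_bot_logic(user_text)          # your existing chatbot logic
    return jsonify({"text": reply})          # rendered into speech by the gateway

def my_bot_logic(text: str) -> str:
    return "Thanks, let me check that for you."

if __name__ == "__main__":
    app.run(port=8080)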

What makes PhoneMyBot appealing to small organizations that may become stranded if intelligent speakers decline too far? It's extremely easy to try: an initial trial period is free, and commercial traffic is billed at a (low) per-minute rate independent of traffic volume. This makes it ideal for low-budget, pay-as-you-go services. The administration is simple and powerful: a single portal provides access to all the traffic data and stats. And it's robust, with an infrastructure built on telco-grade software that manages millions of calls per month.

Go ahead, try it! Click the button below.


My view of Text-to-Speech (TTS) technology evolution in 3 fundamental steps

The Author co-founded Interactive Media in 1996 and is the CEO of the company. Interactive Media is a global developer and vendor of speech applications.

Interactive Media has a long history of developing progressively more sophisticated speech applications, with more than 25 years of experience in managing text-to-speech. But I started working on this even earlier, and I can say that I have been involved in CTI (computer-telephony integration) since its beginning. I want to give a brief perspective on my first-hand experience here.

In 1993 and the following years I had the privilege of collaborating with CSELT, the pre-eminent Italian telecommunications lab in Torino. By then CSELT had already been working on text-to-speech technologies for decades.

Back in the 70s, CSELT and AT&T were the only companies working on developing TTS for commercial use. CSELT's first publicly demonstrated system was called MUSA. You can hear it speak in this video (in Italian): https://www.youtube.com/watch?v=TvKChDE-Lnk.

In 1993 CSELT released Eloquens, also based on diphone concatenation (diphones are the sounds we make from the middle of one phoneme to the middle of the next when we speak a word). Eloquens' quality was much better than MUSA's, and even now it can be considered a good-quality product. It is still in use in several applications. See for instance https://www.youtube.com/watch?v=sZuV1L7cqro.


Recording of the nursery rhyme Fra Martino campanaro (Brother John), as sung by MUSA in 1978

The Eloquens software had been developed for use on a stand-alone PC. But CSELT, which was owned by the national telephone company, naturally had the goal of using it on the telephone network. This is where I came in. At the time, I was a consultant for an Italian company that held the exclusive rights to sell in Italy the computer boards made by Natural Microsystems, an American company. These were among the first CTI boards, allowing a PC to communicate with the telephone network.

My role was to adapt the Eloquens software to run on the board's DSPs, so that it could be used in IVR-type applications. I remember those days as an extraordinary period. Aside from the project, which was very interesting, I was a young engineer just out of university, spending a long period away from home for the first time. Torino was at that time a heavily industrial city: by 8:30 pm all the restaurants were empty and no one was in the streets, and the following day the factory sirens would go off before dawn to mark the start of a new working day. This was quite different from my hometown, Roma. I was working with Marcello Balestri and Luciano Nebbia's group: they were excellent engineers, like most of the staff at CSELT. Together we were able to develop and release the first Italian version, and one of the first in the world, of a commercial TTS that could be used in an IVR system.

Even today, after 30 years, that software is still deployed in some companies. This is also because only in the past few years has there been a substantial technological leap with noticeably better performance, thanks to the use of neural networks and in particular deep learning techniques. When a neural network is trained to perform TTS, the process does not rely on diphone concatenation, so it avoids the "pixelation" that is still present in older systems. With deep learning the prosody is practically perfect, and people sometimes cannot tell a synthetic voice apart from a human speaker.
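To show how accessible neural TTS has become, here is a hedged sketch using Microsoft's Speech SDK for Python, the same family of neural voices mentioned below in connection with TIM Brazil; the subscription key, region, and voice name are placeholders.

import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="brazilsouth")
speech_config.speech_synthesis_voice_name = "pt-BR-FranciscaNeural"  # a neural voice

# audio_config=None returns the audio in memory instead of playing it.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
result = synthesizer.speak_text_async("Olá! Como posso ajudar?").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Audio synthesized:", len(result.audio_data), "bytes")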

One interesting capability of this technology is the possibility of creating one's own synthetic voice by recording a few hours of audio, for instance by reading a text. Among the most otherworldly applications is the use of a synthetic voice to create a digital persona for a person, even after that person has passed away.

To speak of more worldly affairs, Interactive Media recently won a contract to produce all the audio responses in TIM Brazil's customer service systems, using a neural TTS from Microsoft. The resulting quality is amazing, and callers have the feeling that the speaker is a person: polite, sympathetic, and helpful, while still sounding professional. We at Interactive Media are ready to bring this experience, and the know-how we have accumulated over 25 years, to all other markets. Please contact us if the voice you use to talk with your customers is important to you.


Boosting the development of voice-enabled virtual assistants

PhoneMyBot by Interactive Media is a service that transforms chatbots that work only on text conversations into voice-enabled virtual assistants. To do this, PhoneMyBot terminates the voice channel (be it a telephone line, a recorded voice message, or another streaming voice channel), transforms the voice into text through a speech-to-text service, and sends the text over to the chatbot.

When PhoneMyBot receives the answer as a text message from the chatbot, it renders it into speech and pipes it back to the user.
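In pseudocode terms, one conversational turn through such a gateway looks like the sketch below, with hypothetical stt, tts, and chatbot_send callables standing in for the services PhoneMyBot plugs together.

def handle_turn(audio_in: bytes, chatbot_send, stt, tts) -> bytes:
    user_text = stt(audio_in)              # speech-to-text on the caller's audio
    reply_text = chatbot_send(user_text)   # the text-only chatbot answers as usual
    return tts(reply_text)                 # the answer, rendered back into speech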

There are many nuances and details that are missing from the description above (some of them are patent-pending), but a key to PhoneMyBot’s success is the ability to integrate with many chatbot platforms. PhoneMyBot offers a standard cloud API that chatbots can use, but it also includes adaptors that use the chatbot platforms’ native API, simulating a simple web client. This way, PhoneMyBot can communicate with existing chatbot deployments without the need for new developments in the chatbot code. At the moment, PhoneMyBot deploys adaptors for about 10 chatbot platforms, but new ones are coming out all the time, depending on our customers’ needs. If you don’t see an adaptor for your platform, let us know and we can add it.
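The adaptor idea can be pictured as one common interface with one implementation per chatbot platform, each speaking that platform's native (often web-client) API; the class and method names below are ours, not PhoneMyBot's.

from abc import ABC, abstractmethod
import requests

class ChatbotAdaptor(ABC):
    @abstractmethod
    def send(self, session_id: str, text: str) -> str:
        """Send the user's text, return the chatbot's reply text."""

class WebClientAdaptor(ChatbotAdaptor):
    """Talks to a platform's web-chat endpoint the way its own widget would."""
    def __init__(self, base_url: str):
        self.base_url = base_url

    def send(self, session_id: str, text: str) -> str:
        r = requests.post(f"{self.base_url}/messages",
                          json={"session": session_id, "text": text})
        r.raise_for_status()
        return r.json()["text"]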


This service was designed to make it cheap and immediate to add voice to an existing chatbot deployment – and it does that, but as an interesting side effect it also lowers the cost of new voicebot developments, while speeding up their deployment time.

Why is that? It all comes down to the dynamics of the conversational AI market for enterprise customers.

A successful conversational AI project entails more than just software and communications. It needs to be tailored to the company’s workflow, products and services, and lingo. Often, the type of language that needs to be used is not the same as in a general-purpose conversation, and this requires conversational applications to be trained to better support it. Of course, this is a common requirement in this type of project, and conversational AI platforms support language customization. But it still means that project development, testing, refining, and deployment take substantial time and effort.

Now, there are only so many conversational AI vendors offering voice integration, and system integrators who can use their platform to implement projects. In addition to the conversational AI part, a voice-enabled project includes integration with the telephone network or the corporate PBX, insertion into the IVR flow, and integration with the voice path in the contact center – both to forward calls if the virtual assistant cannot service them completely, and to provide call-associated data to human agents to make their work easier and provide better service.

All this requires specialized expertise, which few vendors have. These companies and people are in high demand, so delays can be long and costs high. 

But PhoneMyBot provides a ready alternative, with its pre-integrated voice channels. It includes telephone network and WhatsApp connectivity, and APIs to transfer calls to other voice endpoints (for instance, a contact center queue). Interactive Media has tons of experience integrating with the most common contact center suites, both to insert the virtual assistant into the IVR flow and to send call-attached data to the human agent servicing the call.

This means that the pool of vendors that can bid on a voice-enabled conversational AI project is suddenly much bigger. Even companies with little or no voice expertise can now deliver a high-quality omnichannel virtual assistant: they only need to test their PhoneMyBot integration and iron out any small wrinkle that the additional channel may create in their conversational application strategy.

There are many more text-only conversational AI offers than voice-enabled ones. PhoneMyBot opens the omnichannel market to them, which benefits vendors, their customers, and ultimately the customer experience that you and I receive when we call a customer service line.
