Speech-to-Text results optimization with Interactive Media’s solutions

A historical perspective

Interactive Media has offered Conversational AI solutions for many years, focusing on voice-enabled Virtual Agents. We deployed our first conversational Virtual Agents well before Conversational AI became a buzzword and self-service conversational deployments exploded.

Having focused on voice since the beginning, we are keenly aware of the challenges that come with converting the spoken utterances coming from users into text that conversational systems can use.

This is because conversational AI Virtual Agents can hold a spoken conversation, for instance on the phone, but their AI brain works on text. So, they need to convert the sentences spoken by humans into their text counterpart, and the text that the system uses to answer back into speech.

Ten years ago, the options available on the market to interpret speech and convert it into text (ASR, Automatic Speech Recognition, or Speech-to-Text) were limited. One company, Nuance, dominated the field, having developed its own technology and acquired smaller competitors in different countries to offer Speech-to-Text in multiple languages. So, initially Interactive Media relied on Nuance’s technology for all its voice-enabled Virtual Agent deployments.

Today’s landscape

The state of the technology is vastly different now. The wide adoption of AI has substantially changed the way machines interpret human speech, making Speech-to-Text systems much easier to develop and much better performing: transcription accuracy has improved significantly. Speech-to-Text offerings have exploded in number, and dozens of companies now provide the service, either directly from the public Cloud or integrated more tightly with speech applications.

However, speech is not the same for all people and applications. The variations are staggering. People speak in different ways depending on what they want, what is being asked of them, where they are in a conversation, and of course in dozens of different languages. Providing a Speech-to-Text service that effectively covers all the variations and parts of a conversation is exceedingly hard. So, inevitably some services are better than others for specific tasks and languages.

Interactive Media’s approach to Speech-to-Text

Since Speech-to-Text is still integral to Interactive Media’s offer, we constantly monitor its advances and test different services on a day-to-day basis. We have developed metrics and standardized test suites to inform the decision of which service to use for the benefit of our customers, depending on the use case, which dictates the task at hand, the setting, and the language.

What’s the benefit? We have found that the main general-purpose Speech-to-Text services have some weak points, for instance when the task is to fill in a form with numbers or alphanumeric strings. In this case the field of possible results is limited, but some services don’t seem to use this to their advantage and retain the same recognition accuracy as for general speech. And while 95% recognition accuracy is usually enough to identify an intent (for instance), per-digit errors compound: when you need to take in a string of 10 digits, you’ll get the whole string wrong roughly 40% of the time.

However, other Speech-to-Text engines are optimized for recognizing digits or allow the user to define tight grammars that can help with the task. Using these engines, you can get an accuracy of up to 99% per digit, which over 10 digits results in roughly a 90% probability of getting the whole string right.
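
As a quick sanity check on the arithmetic above, here is a minimal sketch in plain Python (no external libraries), assuming each digit is recognized independently with a fixed per-digit accuracy:

    # Probability of getting an entire string right when each digit is
    # recognized independently with the same per-digit accuracy.
    def whole_string_accuracy(per_digit_accuracy: float, length: int) -> float:
        return per_digit_accuracy ** length

    for accuracy in (0.95, 0.99):
        p = whole_string_accuracy(accuracy, 10)
        print(f"per-digit accuracy {accuracy:.2f} -> 10-digit string correct {p:.1%}")

    # per-digit accuracy 0.95 -> 10-digit string correct 59.9%  (wrong ~40% of the time)
    # per-digit accuracy 0.99 -> 10-digit string correct 90.4%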

Similarly, there are other common tasks that need optimization for the Virtual Agent to be effective. Perhaps the most challenging one is transcribing an email address. Human agents have a hard time with it, and the percentage of errors is exceedingly high. Again, some Speech-to-Text services do better than others, and even a 5% difference makes it worthwhile to switch to a better-performing service mid-call if the volume of traffic is high enough.

So, we engineered our platform to use several of the best Speech-to-Text services, constantly testing the connected services and adding new ones as they become available. It’s a big task, but (we think) we are being fairly smart about it: we model conversations by defining categories of tasks that Virtual Agents must accomplish, and continuously test each of the services we integrate with using sample atomic interactions belonging to each category. This way, we derive scores for the various services for each task, in several languages.
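
Conceptually, the outcome of this testing is a score table indexed by task category and language, which the platform consults to pick an engine. The sketch below only illustrates the idea; the engine names, categories, and scores are made up for illustration and are not our actual benchmark data:

    # Toy lookup of the best Speech-to-Text engine for a task/language pair.
    # Engine names, categories, and scores are illustrative placeholders.
    SCORES = {
        ("digits", "en-US"): {"engine_a": 0.99, "engine_b": 0.95},
        ("email", "en-US"): {"engine_a": 0.88, "engine_b": 0.93},
        ("free_speech", "it-IT"): {"engine_a": 0.94, "engine_b": 0.96},
    }

    def best_engine(task: str, language: str) -> str:
        candidates = SCORES[(task, language)]
        return max(candidates, key=candidates.get)

    print(best_engine("digits", "en-US"))  # -> engine_a
    print(best_engine("email", "en-US"))   # -> engine_b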

This would be academic without a way for the Virtual Agent application to tell us what to expect. So, we added this feature to all our services, provided by the PhoneMyBot and OMNIA platforms. The API lets the application specify the expected category of the utterance coming from the user, based on the question being asked. For instance, if the system prompts the user to provide a numerical code, the service knows that the next utterance is most likely composed of numbers, and will use the Speech-to-Text engine that performs best at recognizing them.
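
As an illustration of what such a hint can look like from the application side, here is a hedged sketch; the endpoint, field names, and category labels are hypothetical placeholders, not the documented PhoneMyBot or OMNIA API:

    # Hypothetical sketch: telling the voice platform what kind of utterance
    # to expect next, so it can route audio to the best-scoring engine.
    # Endpoint and field names are assumptions for illustration only.
    import requests

    def set_expected_utterance(session_id: str, category: str) -> None:
        requests.post(
            f"https://voice-platform.example.com/v1/sessions/{session_id}/expectation",
            json={"category": category},  # e.g. "digits", "alphanumeric", "email", "free_speech"
            timeout=5,
        )

    # Before prompting "Please read me your 10-digit customer code":
    set_expected_utterance("session-123", "digits")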

The difference in performance is substantial. If even 10% fewer calls have to be forwarded to human agents, especially when the task is simply collecting data from the customer, the customer experience is better and the ROI for our customers soars. That is the promise of Virtual Agents, delivered.


PhoneMyBot outbound service

When people think of chatbots, they mostly envision little helpers popping up on the lower right side of webpages. Sometimes a bit annoying if you are not looking for anything in particular, but often helpful, they take away the guesswork of navigating to the right information within the site by interacting with users in natural language. Users write a question, the chatbot interprets its meaning and answers with the information. Or maybe not, depending on how well made the chatbot is.

Chatbots are supplanting the venerable website FAQ section, providing services and answers for the most common needs, and even letting users perform some self-service tasks like ordering products or making payments. This way, chatbots improve the customer experience and handle most interactions in self-service mode, while costing a fraction of live agents, who can concentrate on the interactions that chatbots cannot solve and that require creativity and a human touch.

But chatbots always need users to come to them and initiate the interaction.

The reason is obvious: how can a website reach out to users who are not “visiting” its pages? True, chatbots also use other channels: messaging services (WhatsApp, Facebook Messenger), text messages, email. These can be used to start conversations and sometimes they are. But it’s not common or immediate: people are not necessarily watching their messaging apps all the time and messages from companies can be ignored easily.

There are good reasons for companies to reach users proactively and immediately: for instance, to remind them of an appointment and give them the ability to reschedule, or to confirm an order before it ships. Text messages can be used for that, but there is no guarantee that the answer will be fast, or that there will be an answer at all. The main way to reach people quickly and with real-time feedback is a phone call: the phone rings and, if the user answers, the ensuing conversation makes it possible to go over the matter completely and with a high degree of certainty. Today this is done with automatic dialers backed up by live agents, which is expensive and not pleasant for the agents themselves. Too bad that chatbots cannot use the phone.

Or can they?

PhoneMyBot by Interactive Media provides services that allow chatbots to seamlessly operate on voice channels, starting with the telephone. It is a Cloud-based environment with connectivity to the telephone network and APIs to connect with chatbots, using multiple speech-to-text and text-to-speech services to “translate” between the voice-based and text-based ends.

PhoneMyBot uses a layer of software adaptors to natively talk with several common conversational AI frameworks. Chatbots based on these frameworks don’t have to do anything to interact with voice users: they see the endpoint as just another website-based client. But of course, this is for incoming calls.

But PhoneMyBot also exposes a standard cloud API that chatbots can use, and it supports placing calls to telephones. Once the call is established, the chatbot interacts with the user as in any other chat conversation, leaving the task of converting between text and voice to PhoneMyBot. If the call cannot be connected, or it goes to voicemail, the chatbot receives a message from PhoneMyBot and can continue to the next call.
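
To make the flow concrete, here is a hedged sketch of an outbound call placed through a generic cloud API; the endpoint, payload fields, and status values are assumptions for illustration, not the documented PhoneMyBot API:

    # Hypothetical sketch of placing an outbound call and reacting to the
    # result. Endpoint, payload, and status values are illustrative only.
    import requests

    def place_outbound_call(phone_number: str, greeting: str) -> str:
        response = requests.post(
            "https://voice-platform.example.com/v1/calls",
            json={"to": phone_number, "greeting": greeting},
            timeout=10,
        )
        response.raise_for_status()
        status = response.json().get("status", "unknown")
        if status in ("no_answer", "busy", "voicemail"):
            return "skip"       # notify the chatbot and move on to the next number
        return "connected"      # from here on, the exchange looks like any chat

    result = place_outbound_call("+15551234567",
                                 "Hello, this is a reminder of your appointment tomorrow.")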

The applications are numerous, all resulting in better customer experience and lower costs for the company:

  • Reservations and scheduling
  • Order confirmations, delivery alerts
  • Reminders or appointment confirmation
  • Service renewal
  • Upselling

With its outbound service, PhoneMyBot makes it possible to use chatbots in a completely new way, giving voice to their chat and opening new perspectives. To learn more please visit https://www.phonemybot.com or contact us at info@phonemybot.com.


Interactive Media’s Conversational Virtual Agents and how they interact with people in natural language

Natural Language Processing (NLP) is gaining ever more relevance as it applies to virtual agents. The technology is effective at automating interactions with customers without compromising the quality of service.

In this post we will elaborate on the subject. First, we will address the concept and the importance of virtual agents in corporate service processes. Next, we will detail some of the main challenges and major advantages of NLP when used with bots.

Finally, we will make the case for why Interactive Media, founded more than 20 years ago, is the best choice in service solutions with NLP support.

Happy reading!

Virtual agents: what are they and why are they important in service?

Increasingly common in corporate daily life, virtual agents are computer applications that use artificial intelligence (AI) and machine learning to optimize service processes. Virtual agents use Natural Language Processing (NLP) and Natural Language Understanding (NLU) as the basis for conducting a conversation with people.

By incorporating virtual agents into workflows, companies gain twice: while boosting service team productivity and speeding up problem solving through technology, they also optimize important resources such as labor, time, and money. The result of the equation can be extremely positive and, therefore, very attractive to high-performance companies.

NLP, in turn, plays a decisive role in the effectiveness of virtual agents, especially when the user experience is a central theme. “Generally speaking, Natural Language Processing is the ability of a computer system to interact with people using speech, adapting to understand what they say and to respond to them in a natural way,” says Livio Pugliese, CEO of PhoneMyBot, an Interactive Media company.

Technically speaking, NLP sits at the intersection of linguistics and information technology, benefiting from the advances of both. A Gartner report estimated that by 2021, 15% of all customer service interactions would be fully handled by artificial intelligence mechanisms. In Brazil, the virtual agent market also remains hot: in 2019 alone, 60 thousand bots were launched, almost 353% more than the previous year.

Natural language: what are the biggest challenges and main advantages?

The rapid advance of technology, especially of artificial intelligence, has in recent years led to a substantial increase in the quality of language comprehension and generation. As a result, virtual agents have also improved and gained more and more space in business departments, from sales to technical support.

In practice, NLP needs written text to function. Therefore, it is necessary that machines accurately transcribe what people say, providing a coherent and precise interpretation – which undoubtedly emerges as one of the main challenges of Natural Language Processing.

“Even more challenging, however, is the mission of assigning a faithful meaning to the transcript, since people use many different phrases to say the same thing and, in some cases, the same word can mean different things in specific contexts”, comments Livio Pugliese.

However, overcoming this obstacle brings a substantial reward: adopting automated service solutions brings many advantages and can represent significant gains in the short, medium and long term.

“When we are talking about voice, NLP can help whenever people need to communicate with machines without using their hands,” explains the CEO of PhoneMyBot. The executive reinforces that these days it is not just questions and answers, but conversations. The biggest advantage lies in the ability to understand the user’s intention to, if possible, provide the services most appropriate to the question or complaint.

“Often, the virtual agent gets only some of the meaning during the first conversational exchange,” according to Livio Pugliese. But with AI and machine learning resources, this is no longer a problem: the technology can continue to ask complementary questions to single out the intention until it is completely understood and the most relevant answer is sent to the customer.

The operational impact of virtual agents is important for the bottom line as well. When bots are in charge of leading the initial interaction with customers, often keeping the operation in self-service mode, professionals in the field can dedicate themselves to more analytical and strategic tasks, which helps overall company performance.

If you still have doubts about the efficiency of virtual agents in generating savings for corporations, it is worth remembering that a survey by Juniper Research predicts that, by 2022, companies will save 8 billion dollars a year with the application of conversational technologies. In other words: it is worth investing now.

Experience and technology: why is Interactive Media a specialist in NLP?

Interactive Media develops, deploys, and continuously improves conversational virtual customer service agents across multiple channels. With great expertise in artificial intelligence tools and machine learning, the company helps its customers to optimize their interaction flows with users.

With more than 20 years of experience in voice applications, Interactive Media has implemented many successful use cases in organizations of the most diverse sizes and segments. “Based on the history, we know that virtual agents can solve up to 80% of the problems in the call center, freeing human agents from countless telephone contacts”, points out Pugliese.

Interactive Media’s platform uses a carefully tailored approach to build technologies that foster high-performance understanding, allowing for more accurate interactions, which in turn improves both ends of the chain. For the company, it is about maximizing resources and reducing costs; for the user, it means that problem solving is more agile and efficient.

“At Interactive Media, we cover the complete lifecycle of virtual NLP agents and also of all integrations, which ensures that we can provide the most appropriate and intelligent solution to the problem presented by the customer”, concludes the PhoneMyBot CEO.

We want to end with two complementary conclusions. The first is that Natural Language Processing emerges as the most effective way to enable a more human-like automated service, capable of really understanding the user’s intention. The second solidifies Interactive Media’s position as a reference in the area; after all, even before the chatbot boom, the company already offered AI-based conversational services, and they have only become smarter and more focused with time.

To improve the service flow in your company you need an effective technological mechanism. Contact us and find out how we can help you implement more complete solutions, in line with the demands of an evolving market.


Chatbots and recorded voice – a messaging era dilemma

Chatbots converse with people in natural language and have had an extraordinary proliferation in the past few years. They started as little windows on websites, allowing users to write what they were looking for and providing information directly, instead of forcing people to navigate the complete site looking for their content. 

This is certainly a worthwhile mission, but chatbots have expanded from that, to mining databases and presenting personalized results, and performing mission-critical activities like booking and confirming appointments.

But the past few years have also seen the explosion of mobile messaging services, which are now an integral part of (almost) everyone’s life. They range from simple one-on-one text messages (SMS) to multimedia messages to multiple recipients, and to platforms that straddle the divide between messaging and social networks, like WhatsApp, Viber, Telegram, and Facebook Messenger. The advantages of these services are clear: they are software-only and brought to users on a device that’s always with them, they are free or almost free, they offer multimedia capabilities, and writing texts is faster and more flexible than calling. Even though it is less common in the USA, WhatsApp (owned by Facebook) is currently the biggest mobile messaging app in the world, with about 2 billion users and about 100 billion messages sent per day.

And so, people send and enjoy messages at an ever-increasing rate. Of course, chatbots are also in the mix, following their audience to the channels that they use. This way, people can get services from chatbots on their favorite messaging app, just as if they were messaging with friends.

Chatbots work on text and all messaging applications are based on text. They all support pictures and videos, which are transferred as text-based links that the app follows to retrieve the content or attachments. Chatbots can connect to any messaging platform with an API that allows it, simulating a mobile device or implementing a business endpoint.

All good then? Not completely. A functionality offered by some messaging platforms is to record a voice message instead of typing and send it instead of (or together with) a text message. This is becoming more and more common: people on the move may not want to stop and type, while recording a brief message is fast and easy. It is also more personal: you can say a lot more with your tone of voice than with text and emojis. Humans also appreciate hearing their friends’ voices more than just reading what they write.

But not chatbots. For them, a recorded voice message in a text exchange means the end of the conversation: they are not (in general) equipped to receive a voice file and transcribe it into text to feed to the conversational AI engine that propels the conversation. The alternative, which can be used in high-value conversations like sales or customer support, is to transition the interaction to a human agent who listens to the voice message and replies, taking over the exchange with the user. But this is expensive, as it requires the organization to staff enough humans to pick up failed bot conversations in addition to conducting their normal business.

Even worse would be for human agents to simply listen to and transcribe the message to pass it back to the chatbot: this would be an impossibly dull and menial job, likely to lead to massive turnover.

What is needed is a service to transcribe voice recordings and get them back to chatbots accurately and quickly.  

PhoneMyBot from Interactive Media provides such a service. PhoneMyBot is dedicated to expanding the realm of chatbots to voice, be it from the telephone network or any other channel. For the telephone channel, PhoneMyBot must transform live voice from a user into text and text from the chatbot into voice, all of this in several languages and with a selection of the best speech-to-text service for the job. The same machinery also enables PhoneMyBot to spot-transcribe recorded messages.

A crucial point is making it very easy for chatbots to submit a recorded voice message for transcription. PhoneMyBot exposes a RESTful API for this, supporting numerous encodings and formats for the voice file. And since most users are on WhatsApp, where many chatbots already operate, PhoneMyBot also provides a WhatsApp-enabled number for access: chatbots can send a message to PhoneMyBot with the voice file and receive the transcription back as the response.
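
For illustration, here is a hedged sketch of what submitting a voice file for transcription over a RESTful API might look like; the endpoint, parameters, and response shape are assumptions, not the documented PhoneMyBot API:

    # Hypothetical sketch: upload a recorded voice message and get the
    # transcription back. Endpoint and field names are illustrative only.
    import requests

    def transcribe_voice_message(path: str, language: str = "en-US") -> str:
        with open(path, "rb") as audio:
            response = requests.post(
                "https://voice-platform.example.com/v1/transcriptions",
                files={"audio": audio},        # e.g. an OGG/Opus note from WhatsApp
                data={"language": language},
                timeout=30,
            )
        response.raise_for_status()
        return response.json()["text"]

    print(transcribe_voice_message("voice_note.ogg"))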

With this feature, we at PhoneMyBot believe we have given a definitive answer to the recorded voice message dilemma.


How a fast automated discovery of user intent helps the whole customer service chain

If you read literature about customer support, especially as it relates to self-service support, you frequently find the expression “user intent”. But what is a user intent? We define it as the objective that a consumer wants to achieve when performing a search on the internet, browsing a website, or contacting a company service department. With the rise of automatic systems that let users self-serve in their relationship with companies, how the system discovers and manages intents has become paramount.

There is no doubt that being understood quickly when looking for something is important, but there are typically many ways to express a need, a complaint, or a desire, and people will use them all (even assuming they know exactly what they are looking for, which is not always the case).

So, one of the biggest challenges for automation services is to map these expressions and identify the real motivation behind them.

Of course, when calling a company, users could be directed to speak with a human agent who would quickly determine their need, but this is expensive and does not scale. So automated systems have been available for decades to “qualify the call”, discover the user’s intent, and route the call to the most appropriate service. For many years this meant menu-based systems interacting with users through tones, but more recently Artificial Intelligence systems have become more and more able to converse with people, reducing the steps necessary and delivering a much more pleasant, agile, and accurate customer experience.

So, interest and investments in Conversational AI able to discover the user’s intent are increasing significantly across the sectors in which it operates: from the development of algorithms and intelligent systems to advertising and Inbound Marketing strategies. But how, in fact, does intent discovery work, and how can this technology help serve your company? This is what we discuss here.

How do intelligent systems identify intents?

When a user reaches an intelligent conversational virtual agent, the experience is very different from an old IVR, although the objective of the first part of the call is the same – identifying the caller’s intent. A conversational AI Virtual Agent will start the conversation with an open question, something like “Good morning, you have reached Company X, how can we help you?”. Users are thus free to express their need in any way they like.

When they speak, their sentence is first transcribed from voice to text by the Virtual Agent, then sent to a conversation engine for analysis. Engines can be of different types, but all of them compare the sentence with a knowledge base of possible requests, related to the capabilities of the service. Of course, the number of possible intents is not infinite; in fact, they are the same ones an older IVR system could serve. So, the “domain” in which the Virtual Agent searches for the intent is limited, which facilitates the search.
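
As a toy illustration of how a limited domain makes the search tractable, the sketch below matches a transcribed sentence against a small set of intents by keyword overlap; real engines use statistical or neural models, and the intent names and keywords here are invented for the example:

    # Toy intent matcher over a limited domain. Intent names and keywords
    # are invented for illustration; production engines are far richer.
    INTENT_KEYWORDS = {
        "book_appointment": {"appointment", "book", "reserve", "visit"},
        "cancel_appointment": {"cancel", "delete"},
        "opening_hours": {"hours", "open", "close"},
    }

    def detect_intent(sentence: str):
        words = set(sentence.lower().split())
        scores = {intent: len(words & kw) for intent, kw in INTENT_KEYWORDS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else None

    print(detect_intent("I would like to reserve an eye doctor appointment"))
    # -> book_appointment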

It is possible that the caller has already specified all that the system needs to know to correctly identify the intent, but very often this is not the case. For instance, a system to automatically book an appointment will need to know the name of the user, the type of appointment, the location, the date, and the time. No one would say all this in the initial sentence. But a well-designed Virtual Agent will be able to narrow down the possibilities gradually until it has completely identified the user’s intent and can service their need.

One advantage of Virtual Agents over old IVR systems is that the pieces of information can come in any order. So, continuing with the appointment booking example, the user may say “I would like to reserve an eye doctor appointment”, and the Virtual Agent can then ask what day they want it, where, and at what time (assuming that the user’s phone number is already in the database and so the system knows who it is talking with). But the user could also say: “I need an appointment tomorrow”. In this case the Virtual Agent would reply: “What type of appointment? We cover ophthalmology, dermatology and radiology”, and then proceed to collect the rest of the information, as in the sketch below.
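
A minimal sketch of this kind of order-independent slot filling, continuing the appointment example; the slot names and prompts are hypothetical, not Interactive Media’s actual dialog definitions:

    # Minimal slot-filling sketch: ask for whatever is still missing,
    # in whatever order the user provides information.
    REQUIRED_SLOTS = ["appointment_type", "location", "date", "time"]

    PROMPTS = {
        "appointment_type": "What type of appointment? We cover ophthalmology, dermatology and radiology.",
        "location": "Which location would you like?",
        "date": "What day works for you?",
        "time": "What is your preferred time?",
    }

    def next_prompt(collected: dict):
        """Return the question for the first missing slot, or None when done."""
        for slot in REQUIRED_SLOTS:
            if slot not in collected:
                return PROMPTS[slot]
        return None  # all slots filled: the booking can be completed

    # The user opened with "I need an appointment tomorrow":
    collected = {"date": "tomorrow"}
    print(next_prompt(collected))  # -> asks for the appointment type first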

The conversation will continue until the Virtual Agent has collected all the information necessary to provide the service, or if there are complications, at least the Virtual Agent will be able to forward the call to the correct human agents – in this case, the ones serving the Ophthalmology department.

The advantages of an autonomous voice service

The ability of Virtual Agents to identify intents quickly and precisely provides several advantages for companies, especially ones with a high volume of customer interactions and several departments. To start with, it is not only the intent that the Virtual Agent identifies, but also what are called “entities”: the pieces of information that make providing the service possible. In the example above, the intent is the type of appointment that the user seeks; the entities are the date, the time, and the location. Having the complete set of information often enables the Virtual Agent to complete the service without contacting a human agent, thus saving time and money.

Even when the service is not provided completely by the Virtual Agent, interactions are routed to the correct human agent queue with a much higher precision, greatly reducing the percentage of calls that have to be transferred to another department. This also saves money and time, not to mention providing a better customer experience.

Finally, the ability of Virtual Agents to collect most if not all of the necessary information and transfer it to human agents together with the call also helps keep calls shorter and saves money.

Other advantages include the ability of the service to scale to meet demand, much faster than a contact center staffed with human agents can. If a service peak is coming, due to the season or a scheduled event, or even in an emergency, it is easy to simply add Virtual Agents that come in as perfectly prepared and trained as the ones already in use. This buffers the traffic increase on the “real” contact center, as a high percentage of the peak calls are resolved in self-service mode and the peak on human agents is smoothed.

Virtual Agents also remove the limitations of day and time, since they work 24 hours a day, 7 days a week. Continuous service prevents demands from being “dammed” at the ends of the week and contributes to customer satisfaction, as most of their needs are met at any time and without waiting on the phone.

Virtual agents acceptance

Virtual Agents of all types are becoming more and more common, and thus accepted by the general public. People are increasingly used to controlling computer services by voice, from smart speakers to search engine queries to interactions with virtual agents over the phone. So, Virtual Agents are accepted by the public immediately, and gladly. A conversational experience not only contributes to a more human service, but also makes the consumer’s routine more practical. Many problems can be solved without the customer having to click on a single button.

Understanding of the customer journey

For a company to be able to provide excellent customer service, it needs to know its customers and their journey while seeking support: the path taken from the first contact to when their need is met. This understanding allows organizations to deliver exactly what their customers are looking for at each stage of the journey.

Until the recent past, it was not possible to obtain this knowledge with satisfactory precision, since the available data was very limited, especially on the telephone channel. The best companies could do was record calls and then select a sample to analyze, which was either expensive (if they wanted a more complete picture) or necessarily incomplete.

A Virtual Agent, however, works on text, and so all calls are transcribed. This makes it possible to use text analysis tools, also based on Artificial Intelligence, to monitor consumer behavior in detail and gain valuable insights.

About Interactive Media

Founded in Italy over 20 years ago, and with offices in Brazil and the USA, Interactive Media is at the forefront of Conversational AI technology and processes hundreds of millions of customer service conversations a year with its Virtual Agents, in different countries and languages.

Now that you understand the importance of the user’s intent, it is time to see up close how Virtual Agents interpret and use it in service. Get in touch with us and get to know OMNIA, our complete solution for the development, deployment, training, management and monitoring of Omnichannel Virtual Agents.
