The future of intelligent voice

As the market for smart speakers falters, what are the Big Three (Amazon, Apple, Google) going to do?

Alexa, should I bring an umbrella out tomorrow? This is a question that owners of smart speakers have been asking since 2013, the year when Amazon released its first Echo product. Soon Google and Apple followed suit, with their Google Assistant and Siri technologies.

While Siri is embedded in Apple hardware as a software feature, both Amazon and Google produced and actively sold the hardware to support their speech software: lines of smart speakers with sensitive microphones that listen for people uttering a key phrase and then capture what they say. The rise of these devices has been meteoric. They were cheap and convenient, and they largely supplanted both radio and stereo systems in the home by streaming content controlled by voice. They sold by the tens of millions, both in the US and around the world: according to a Comscore report, in 2021 almost half of US internet users owned at least one.

Most people in the US are familiar with Alexa: she listens to the sounds around her and, when she hears her name, springs into action. This means recording the sentence that comes after the keyword, sending the audio to the Amazon cloud for recognition, then receiving the answer and playing it back. (Supposedly, nothing is recorded outside of this keyword-initiated transaction.) The same is true for the Google version, though “Hey Google” is both longer and less personal.

As an aside, I know someone whose name is Alexa – and it was her name well before Amazon released the first Echo. I wonder how she feels being called upon to do the bidding of countless people…

The problem with the status quo: lack of revenues

As often happens in the tech industry, for smart speakers the technology leapt ahead of the profitable use cases. Yes, people were and are using their smart speakers often, but mostly to ask general questions, check the weather, and stream music. The vendors figured that, with time and as adoption increased, they could come up with a revenue model that would support the business, but so far no one has managed it.

Of course, there are ads within music streaming if the owner does not subscribe to a music service, but they are kept few and far between so as not to degrade the experience too much. And a $10-a-month music subscription is hardly enough to support providing and maintaining the infrastructure for the rest of the service.

The most profitable use case hoped for at the beginning, shopping by voice, never took off: people are understandably wary of providing personal information, credit card numbers, and the like to the cloud through yet another channel, and by definition any shopping done through a smart speaker is “sight unseen”.

So, in the past few months, with the changing economy and the realization of how difficult it is to really monetize smart speakers, there has been a definite retrenchment by both Amazon and Google. Amazon laid off a good portion of the Alexa development team, Google reportedly greatly reduced funding for the Assistant line, and – this is very recent news – Alphabet is laying off as many as 12,000 workers in January 2023. One can imagine that the worst-performing divisions will be most affected.

Smart speakers are in trouble.

Voice apps on smart speakers

However, many companies and organizations have developed apps that integrate with Alexa and Google Assistant through the respective APIs. In this case, the smart speakers act simply as a speech transcription and rendering interface: once the app is active, they transcribe what the user says and send the text to the external service, then take the text that the service sends back and render it into voice for the user to hear.

Amazon calls these apps Skills; Google calls them Actions. Either way, there are hundreds of thousands of them. They can be launched with a special prompt: “Alexa, open [skill name]” or “Hey Google, talk to [action name]”. While many apps have not been successful and see minimal use from this channel, others are important or even essential.

What happens to these apps if the smart speaker vendors limit and then terminate their offer? Some merely activate an additional channel to a wider service and presumably would not be impacted too severely. But others were developed specifically to take advantage of the voice channel offered for free by smart speakers. For instance, I recently talked with the developer of a skill for blind people, who use their voice to access information that others get from screens.

Skills and Actions developers are seriously worried.

On the other hand, what other conduits are there for two-way, intelligent voice applications in the house? Well, the one we’ve always had: the telephone (no matter if fixed or mobile). Granted, calling an app over the phone is a little more complex than simply saying “Hey Google”, but everyone knows how to use a phone and the technology could not be more tried-and-true. The problem then is connecting existing intelligent applications to the telephone network.

PhoneMyBot as the conduit for voice apps

Interactive Media offers PhoneMyBot, a service born to expand the channels available to chatbots to include voice channels. It performs the same functions that smart speakers perform for their apps: it transcribes the user’s speech and sends it to the connected application, then receives text in return and transforms it into speech, piping it into the voice network. PhoneMyBot is natively integrated into the telephone network and exposes to apps an API equivalent to the ones from Alexa and Google Assistant. In addition, PhoneMyBot integrates with a number of contact center suites to transfer the call to a human agent if necessary.
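In rough terms, the contract such a conduit exposes to an application can be sketched like this – a minimal illustration; the function and field names here are invented for the example, not PhoneMyBot’s actual API:

```python
# Minimal sketch of the app-side contract described above: the conduit delivers
# the user's transcribed utterance as text, and the application returns text
# to be rendered back into speech. All names are illustrative.

def handle_utterance(session: dict, text: str) -> str:
    """Receive one transcribed user utterance; return the text reply."""
    if "umbrella" in text.lower():
        return "Tomorrow looks rainy, so yes, bring an umbrella."
    return "Sorry, I did not understand. Could you rephrase?"

# The conduit would call this once per conversational turn:
print(handle_utterance({"call_id": "abc123"}, "Should I bring an umbrella tomorrow?"))
```

The point is that the application never touches audio: speech-to-text on the way in and text-to-speech on the way out are the conduit’s job, whether the conduit is a smart speaker platform or a telephone gateway.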

What makes PhoneMyBot appealing to small organizations that may become stranded if smart speakers decline too much? It’s extremely easy to try: an initial trial period is free, and commercial traffic is billed at a (low) per-minute rate, independent of traffic volume. This makes it ideal for low-budget, pay-as-you-go services. The administration is simple and powerful: a single portal provides access to all the traffic data and stats. And it’s robust, with an infrastructure built on telco-grade software that manages millions of calls per month.

Go ahead, try it! Click the button below.

My view of Text-to-Speech (TTS) technology evolution in 3 fundamental steps

The Author co-founded Interactive Media in 1996 and is the CEO of the company. Interactive Media is a global developer and vendor of speech applications.

Interactive Media has a long history of developing progressively more sophisticated speech applications, with more than 25 years of experience managing text-to-speech. But I started working on this even earlier, and I can say that I have been involved in CTI (computer-telephony integration) since its beginning. I want to give a brief perspective of my first-hand experience here.

In 1993 and the following years I had the privilege of collaborating with CSELT, the pre-eminent Italian telecommunications lab in Torino. By then, CSELT had already been working on text-to-speech technologies for decades.

Back in the 70s, CSELT and AT&T were the only organizations working on developing TTS for commercial use. CSELT’s first publicly demonstrated system was called MUSA. You can hear it speak in this video (in Italian): https://www.youtube.com/watch?v=TvKChDE-Lnk.

In 1993 CSELT released Eloquens, also based on diphone concatenation (diphones are the sounds we make from the middle of one phoneme to the middle of the next when we speak a word). Eloquens’ quality was much better than MUSA’s, and even now it can be considered a good-quality product. It is still in use in several applications. See for instance https://www.youtube.com/watch?v=sZuV1L7cqro.

Recording of the nursery rhyme Fra Martino campanaro (Brother John), as sung by MUSA in 1978

The Eloquens software had been developed to run on a stand-alone PC. But CSELT, which was owned by the national telephone company, naturally had the goal of using it on the telephone network. This is where I came in. At that time, I was a consultant for an Italian company that had the exclusive rights to sell in Italy the computer boards made by Natural Microsystems, an American company. These were among the first CTI boards, which allowed a PC to communicate with the telephone network.

My role was to adapt the Eloquens software to run on the board’s DSPs, so that it could be used in IVR-type applications. I remember those days as an extraordinary period. Aside from the project, which was very interesting, I was a young engineer just out of university, spending a long period away from home for the first time. Torino was at that time a heavily industrial city: at 8:30 pm all restaurants were empty and no one was in the streets, and the following day the factory sirens would go off before dawn to mark the start of a new working day. This was quite different from my hometown, Roma. I was working with Marcello Balestri and Luciano Nebbia’s group: they were excellent engineers, like most of the staff at CSELT. Together we were able to develop and release the first Italian version, and one of the first in the world, of a commercial TTS that could be used in an IVR system.

Even today, after 30 years, that software is still deployed in some companies. This is also because only in the past few years has there been a substantial technological leap with noticeably better performance, thanks to the use of neural networks and in particular deep learning techniques. When a neural network is trained to perform TTS, the process does not rely on diphone concatenation, so it avoids the “pixelation” that is still present in older systems. Using deep learning, the prosody is practically perfect, and people sometimes cannot tell a synthetic voice apart from a human speaker.

One interesting capability of this technology is the possibility of creating one’s own synthetic voice by recording a few hours of audio, for instance by reading a text. Among the most otherworldly applications is the use of a synthetic voice to create a digital persona for a person, even after that person has passed away.

To speak of more worldly affairs, Interactive Media recently won a contract to produce all the audio responses in TIM Brazil’s customer service systems, using a neural TTS from Microsoft. The resulting quality is amazing: the caller has the feeling that the speaker is a person, polite, sympathetic, and helpful, while still professional-sounding. We at Interactive Media are ready to expand on this experience, with the know-how accumulated over 25 years, in all other markets. Please contact us if the voice you use to talk with your customers is important to you.

Boosting the development of voice-enabled virtual assistants

PhoneMyBot by Interactive Media is a service that transforms chatbots, which work only on text conversations, into voice-enabled virtual assistants. To do this, PhoneMyBot terminates the voice channel – be it a telephone line, a recorded voice message, or another streaming voice channel – transforms the voice into text through a speech-to-text service, and sends the text over to the chatbot.

When PhoneMyBot receives the answer as a text message from the chatbot, it renders it into speech and pipes it back to the user. You can learn more about PhoneMyBot here.

There are many nuances and details missing from the description above (some of them patent-pending), but a key to PhoneMyBot’s success is its ability to integrate with many chatbot platforms. PhoneMyBot offers a standard cloud API that chatbots can use, but it also includes adaptors that use the chatbot platforms’ native APIs, simulating a simple web client. This way, PhoneMyBot can communicate with existing chatbot deployments without the need for new development in the chatbot code. At the moment, PhoneMyBot deploys adaptors for about 10 chatbot platforms, and new ones come out all the time, depending on our customers’ needs. If you don’t see an adaptor for your platform, let us know and we can add it.
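The adaptor idea can be sketched as follows – a minimal illustration assuming a common interface with per-platform implementations; the class and method names are invented for the example, not PhoneMyBot’s real code:

```python
# Sketch of the adaptor pattern described above: the gateway speaks one
# common interface, and per-platform adaptors translate to each chatbot
# platform's native API. Names here are illustrative only.

from abc import ABC, abstractmethod

class ChatbotAdaptor(ABC):
    @abstractmethod
    def send_user_text(self, session_id: str, text: str) -> str:
        """Forward the transcribed utterance; return the bot's text reply."""

class EchoAdaptor(ChatbotAdaptor):
    """Stand-in for a real platform adaptor, for demonstration only."""
    def send_user_text(self, session_id: str, text: str) -> str:
        return f"You said: {text}"

def route(adaptor: ChatbotAdaptor, session_id: str, transcribed: str) -> str:
    # The gateway does not know (or care) which platform sits behind the adaptor.
    return adaptor.send_user_text(session_id, transcribed)

print(route(EchoAdaptor(), "s1", "hello"))
```

The benefit of this shape is that adding support for a new chatbot platform means writing one more adaptor, while the voice side of the gateway stays untouched.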

This service was designed to make it cheap and immediate to add voice to an existing chatbot deployment – and it does that, but as an interesting side effect it also lowers the cost of new voicebot developments, while speeding up their deployment time.

Why is that? It all comes down to the dynamics of the conversational AI market for enterprise customers.

A successful conversational AI project entails more than just software and communications. It needs to be tailored to the company’s workflow, products and services, and lingo. Often, the type of language that needs to be used is not the same as in a general-purpose conversation, and this requires conversational applications to be trained to better support it. Of course, this is a common requirement in this type of project, and conversational AI platforms support language customization. But it still means that project development, testing, refining, and deployment take substantial time and effort.

Now, there are only so many conversational AI vendors offering voice integration, and system integrators who can use their platform to implement projects. In addition to the conversational AI part, a voice-enabled project includes integration with the telephone network or the corporate PBX, insertion into the IVR flow, and integration with the voice path in the contact center – both to forward calls if the virtual assistant cannot service them completely, and to provide call-associated data to human agents to make their work easier and provide better service.

All this requires specialized expertise, which few vendors have. These companies and people are in high demand, so delays can be long and costs high. 

But PhoneMyBot provides a ready alternative, with its pre-integrated voice channels. It includes telephone network and WhatsApp connectivity, and APIs to transfer calls to other voice endpoints (for instance, a contact center queue). Interactive Media has tons of experience integrating with the most common contact center suites both to insert the virtual assistant into the IVR flow and to send data attached to calls to the human agent who is servicing it.

This means that the pool of vendors that can bid on a voice-enabled conversational AI project is suddenly much bigger. Even companies with little or no voice expertise can now deliver a high-quality omnichannel virtual assistant: they only need to test their PhoneMyBot integration and iron out any small wrinkles that the additional channel may create in their conversational application strategy.

There are many more text-only conversational AI offers than voice-enabled ones. PhoneMyBot opens the omnichannel market to them, which benefits vendors, their customers, and ultimately the customer experience that you and I receive when we call a customer service line.

WhatsApp voice messages and how chatbot can use them

WhatsApp lets people record and send voice messages. What does it mean for the chatbot customer experience?

Like most Europeans – well, I should say most people in the world – I am a WhatsApp user. WhatsApp has more than 2 billion users worldwide, about a quarter of all humans. And although WhatsApp’s penetration in the United States is lower than in most places, if you are a foreign-born US resident who wants to keep in touch with friends and family back home, like me, WhatsApp is THE app to use.

WhatsApp offers chats, voice calls, video calls, one-on-one or among ad-hoc or organized groups. It also has a business offer, allowing companies to be messaged or called on WhatsApp to be where their customers are.

This feature was introduced in 2018 and is being used more and more: people appreciate using the same app to communicate with individuals and companies, and many telecommunications vendors resell WhatsApp business numbers and the services that come with them.

While I am a member of a couple of organized groups, I mostly use the app to message my friends or call them directly, rarely involving more than one person at a time. But I noticed a funny thing: some of my friends have stopped sending chat messages altogether. Instead, they use another feature of the app, which lets you record a voice message and send it in a conversation. I prefer to type and let the autocompletion feature on my smartphone work its magic, also considering that receiving a voice message is certainly less immediate than reading a short text. But I can see several reasons for preferring to send a voice recording.

For instance, you may be on the go, without the time or place to type. Or you may have trouble seeing the phone keyboard, either because of light conditions or because you can’t see very well (I certainly have problems typing without my reading glasses; I am at that stage of life). You may want to be more expressive using your tone of voice: spoken communication is much better than text at conveying feelings. Or you may not be comfortable writing in general – or the person on the other side may have problems reading. For all these reasons, and possibly others that I can’t think of, sending voice messages instead of typing is on the rise.

And this is fine, as long as you communicate with a human who speaks the same language as you. But there is a special use case that is completely broken by this habit: communicating with a chatbot. You see, businesses that use WhatsApp to communicate with their customers via text messages often employ chatbots, automatic “conversational AI” attendants that use natural language capabilities to converse with people, understand the reason for the interaction, and help them more efficiently and cheaply than having a human customer representative on the line the whole time. Except that chatbots understand WRITTEN communication, not voice recordings.

Yet more and more chatbots connected to WhatsApp receive recorded voice messages. In this case there are two possibilities: the chatbot recognizes that it cannot access the message and dumps the session, or it transfers the session to a human agent who listens to the message, researches the answer, and writes back. The first case of course leads to an awful customer experience, the second to a substantial increase in costs, as the human agent does a job the chatbot could do, having to listen to sometimes long and rambling messages to extract their meaning.

What is there to do? Interactive Media, the company where I work, has launched PhoneMyBot, a service that provides an alternative, cheaper, and far more elegant solution to the problem. PhoneMyBot was born to expand the channels available to chatbots to include voice channels. It provides a telephone network interface, along with other voice integrations: it transcribes the users’ utterances and sends them to the chatbot, then receives text in return from the chatbot, transforms it into speech, and sends it back to the user over the voice network. PhoneMyBot is completely cloud-based, and it also integrates with a number of contact center suites to transfer the call to a human agent if necessary.

In addition, PhoneMyBot integrates with WhatsApp to receive a recorded voice message in a set language from a chatbot, transcribe it, and send it back to the chatbot as text. All the chatbot has to do is communicate with PhoneMyBot’s WhatsApp number to set the language, send the voice file, and receive the transcription. PhoneMyBot also exposes a standard HTTPS-based API for that, which the chatbot can use with a small development effort.
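The round trip might look roughly like this in code – a hypothetical sketch in which the payload shape and the transport function are assumptions for illustration only; the actual PhoneMyBot HTTPS API will differ, so check its documentation:

```python
# Hypothetical sketch of the voice-message transcription round trip described
# above. The payload shape and the transport callable are invented for this
# example; in production the transport would be an HTTPS POST to the service.

def transcribe_voice_message(audio_bytes: bytes, language: str, transport) -> str:
    """Send a recorded voice message for transcription; return the text."""
    payload = {"language": language, "audio": audio_bytes}
    response = transport(payload)  # stands in for the real HTTPS call
    return response["text"]

# A stub transport, so the sketch runs without any network access:
def fake_transport(payload: dict) -> dict:
    assert payload["language"] == "en-US"
    return {"text": "please send me my account balance"}

print(transcribe_voice_message(b"\x00\x01", "en-US", fake_transport))
```

Separating the transport from the logic like this also makes the chatbot side easy to test before pointing it at the live endpoint.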

It may be that the primary reason some people use WhatsApp’s recorded voice messages feature is that they have difficulty reading and writing. You may think this is a problem of the past, overcome now everywhere. But not so fast. The latest figures for United States residents put the non-literacy rate at about 1%. The US is in the middle of the pack here: China (3%), Brazil (7%), and India (25%) fare a lot worse. (See https://www.macrotrends.net/countries/ranking/literacy-rate for a complete list.) The figure for people who have basic literacy but are uncomfortable reading and writing is likely much higher. So, this is a real possibility.

In addition, PhoneMyBot can also convert the text received from the chatbot to speech (with a choice of voices) and send it back to the chatbot to attach to the WhatsApp response message. This way, users who would like to conduct the complete conversation with recorded messages can receive the chatbot’s answer on their preferred channel.

Sometimes useful features in products and services have unintended consequences. I am sure that when WhatsApp introduced their voice messages feature, they were thinking of human-to-human communications only and for this use case it is a great alternative. But it breaks other use cases, like human-to-machine interactions. Fortunately, PhoneMyBot is there to fix it.

You can try PhoneMyBot’s WhatsApp message transcription right now. To get started scan the code below, fire up WhatsApp on your phone and start the interaction with the word “start” as first message. If you type “help”, PhoneMyBot sends you details on how to use the service.

Speech-to-Text results optimization with Interactive Media’s solutions

An historical perspective

Interactive Media has offered conversational AI solutions for many years, focusing on voice-enabled Virtual Agents. We deployed our first conversational Virtual Agents well before conversational AI became a buzzword and self-service conversational deployments exploded.

Having focused on voice since the beginning, we are keenly aware of the challenges that come with converting the spoken utterances coming from users into text that conversational systems can use.

This is because conversational AI Virtual Agents can hold a spoken conversation, for instance on the phone, but their AI brain works on text. So, they need to convert the sentences spoken by humans into their text counterpart, and the text that the system uses to answer back into speech.

Ten years ago, the options available on the market to interpret speech and convert it into text (ASR, Automatic Speech Recognition, or Speech-to-Text) were limited. One company, Nuance, dominated the field, having developed its own technology and acquired smaller competitors in different countries to offer Speech-to-Text in different languages. So, initially, Interactive Media relied on Nuance’s technology for all its voice-enabled Virtual Agent deployments.

Today’s landscape

The state of the technology is vastly different now. The wide adoption of AI has substantially changed the way machines interpret human speech, making Speech-to-Text systems much easier to develop and much better performing – meaning that transcription precision has improved significantly. Speech-to-Text offers have exploded in number, and dozens of companies now provide the service, either directly from the public cloud or integrated more tightly with speech applications.

However, speech is not the same for all people and applications. The variations are staggering. People speak in different ways depending on what they want, what is being asked of them, and where they are in a conversation – and of course in dozens of different languages. Providing a Speech-to-Text service that effectively covers all the variations and parts of a conversation is exceedingly hard. So, inevitably, some services are better than others for specific tasks and languages.

Interactive Media’s approach to Speech-to-Text

Since Speech-to-Text is still integral to Interactive Media’s offer, we are constantly monitoring its advances and testing different services on a day-to-day basis. We have developed metrics and standardized test suites to inform the decision of what service to use for the benefit of our customers, depending on the use case which dictates the task at hand, the settings, and the language.

What’s the benefit? We have found that the main general-purpose Speech-to-Text services have weak points, for instance when the task is to fill in a form with numbers or alphanumeric strings. In this case the field of possible results is limited, but some services don’t seem to use this to their advantage and retain the same recognition accuracy as for general speech. And while a 95% recognition accuracy is usually enough to find an intent (for instance), when you need to take in a string of 10 digits you’ll get the whole string wrong roughly 40% of the time.

However, other Speech-to-Text engines are optimized for recognizing digits, or allow the user to define tight grammars that help with the task. Using these engines, you can reach an accuracy of up to 99% per digit, which over 10 digits results in a 90% probability of getting the whole string right.
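The arithmetic above is easy to check: if each digit is recognized independently with per-digit accuracy p, the probability of transcribing an n-digit string with no errors is p to the power n.

```python
# Checking the figures quoted above: with independent per-digit accuracy p,
# an n-digit string is transcribed fully correctly with probability p**n.

def whole_string_accuracy(per_digit: float, digits: int = 10) -> float:
    return per_digit ** digits

print(f"95% per digit: {whole_string_accuracy(0.95):.0%} of 10-digit strings fully correct")
print(f"99% per digit: {whole_string_accuracy(0.99):.0%} of 10-digit strings fully correct")
```

With 0.95 per digit, only about 60% of 10-digit strings come through intact (wrong roughly 40% of the time); raising per-digit accuracy to 0.99 lifts that to about 90%.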

Similarly, there are other common tasks that need optimization for the Virtual Agent to be effective. Maybe the most challenging one is transcribing an email address. Even human agents have a hard time with it, and the percentage of errors is exceedingly high. Again, some Speech-to-Text services do better than others, and even a 5% difference makes it worth switching to a better-performing service mid-call if the volume of traffic is high enough.

So, we engineered our platform to use several of the best Speech-to-Text services, constantly testing the connected services and adding new ones as they become available. It’s a big task, but (we think) we are being fairly smart about it: we model conversations by defining categories of tasks that Virtual Agents must accomplish, and continuously test each of the services we integrate with using sample atomic interactions belonging to each category. This way, we derive scores for the various services for each task, in several languages.

This would be academic without a way for the Virtual Agent application to tell us what to expect. So, we added this feature to all our services, provided by the PhoneMyBot and OMNIA platforms. The API allows the application to specify the expected category of utterance coming from the user, based on the question being asked. So, for instance, if the system prompts the user to provide a numerical code, the service knows that the next utterance is most likely composed of numbers and will use the Speech-to-Text engine that performs best at recognizing them.
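The routing idea can be sketched as follows; the engine names, category labels, and scores here are invented for illustration and are not measurements from our test suites:

```python
# Sketch of category-based engine selection: per-task scores (derived from
# continuous testing, here invented for illustration) determine which
# Speech-to-Text engine handles the next utterance.

SCORES = {  # engine -> category -> measured accuracy (illustrative numbers)
    "engine_a": {"digits": 0.99, "general": 0.93, "email": 0.88},
    "engine_b": {"digits": 0.94, "general": 0.96, "email": 0.90},
}

def pick_engine(expected_category: str) -> str:
    """Choose the engine with the best score for the expected utterance type."""
    return max(SCORES, key=lambda e: SCORES[e].get(expected_category, 0.0))

print(pick_engine("digits"))   # the digit-optimized engine
print(pick_engine("general"))  # the better general-speech engine
```

In a real deployment the category hint would arrive through the API together with the dialogue turn, and the score table would be refreshed as new engines are tested.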

The difference in performance is substantial. If even 10% fewer calls have to be forwarded to human agents – especially when the task is simply collecting data from the customer – the customer experience is better and the ROI for our customers soars. That is the promise of Virtual Agents, delivered.

PhoneMyBot outbound service

When people think of chatbots, they mostly envision little helpers popping up in the lower right corner of webpages. Maybe a bit annoying if you are not looking for anything in particular, but often helpful, they take away the guesswork of navigating to the right information within the site by interacting with users in natural language. Users write a question, the chatbot interprets its meaning and answers with the information. Or maybe not – depending on how well made the chatbot is.

Chatbots are supplanting the venerable website FAQ section: they provide services and answers for the most common needs, and even let users perform some self-service tasks, such as ordering products or making payments. In this way, chatbots improve the customer experience and handle most interactions in self-service mode at a fraction of the cost of live agents, who can then concentrate on the interactions that chatbots cannot solve and that require creativity and a human touch.

But chatbots always need users to come to them and initiate the interaction.

The reason is obvious: how can a website reach out to users who are not visiting its pages? True, chatbots also use other channels: messaging services (WhatsApp, Facebook Messenger), text messages, email. These can be used to start conversations, and sometimes they are. But it's neither common nor immediate: people are not necessarily watching their messaging apps all the time, and messages from companies are easily ignored.


There are good reasons for companies to reach users proactively and immediately: for instance, to remind them of an appointment and give them the ability to reschedule, or to confirm an order before it ships. Text messages can be used for this, but there is no guarantee that the answer will be fast, or that there will be an answer at all. The main way to reach people quickly and with real-time feedback is a phone call: the phone rings, and if the user answers, the ensuing conversation settles the matter completely and with a high degree of certainty. Today this is done with automatic dialers backed up by live agents, which is expensive and not pleasant for the agents themselves. Too bad that chatbots cannot use the phone.

Or can they?

PhoneMyBot by Interactive Media provides services that let chatbots operate seamlessly on voice channels, starting with the telephone. It is a Cloud-based environment with connectivity to the telephone network, APIs to connect with chatbots, and multiple speech-to-text and text-to-speech services to "translate" between the voice-based and text-based ends.

PhoneMyBot uses a layer of software adaptors to talk natively with several common conversational AI frameworks. Chatbots based on these frameworks don't have to do anything special to interact with voice users: they see the endpoint as just another website-based client. But of course, this covers only incoming calls.
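The adaptor idea can be sketched as a thin translation layer per framework, so that from the bot's point of view a voice turn arrives as an ordinary chat message. The class and method names below are illustrative assumptions, not actual PhoneMyBot code.

```python
# Minimal sketch of the adaptor concept: one small translation layer
# per supported chatbot framework. Names here are hypothetical.
class ChatbotAdaptor:
    """Translate between the voice platform's turns and a framework's
    native chat API."""
    def send_user_turn(self, session_id, text):
        raise NotImplementedError

class EchoFrameworkAdaptor(ChatbotAdaptor):
    # Stand-in for a real framework connector: it just echoes the user.
    def send_user_turn(self, session_id, text):
        return f"You said: {text}"

# The voice side transcribes audio to text, hands it to the adaptor,
# and speaks the reply back through text-to-speech.
adaptor = EchoFrameworkAdaptor()
reply = adaptor.send_user_turn("call-42", "What are your opening hours?")
```

A real connector would map session state and rich responses as well, but the shape stays the same: text in, text out, with the framework none the wiser about the telephone on the other end.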

PhoneMyBot also exposes a standard cloud API that chatbots can use, and it supports placing calls to telephones. Once the call is established, the chatbot interacts with the user as in any other chat conversation, leaving the task of converting between text and voice to PhoneMyBot. If the call cannot be connected, or goes to voicemail, the chatbot receives a message from PhoneMyBot and can move on to the next call.
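A rough sketch of this outbound flow is below: one helper builds the request to place a call, another reacts to the platform's status callback. Endpoint fields, status values, and function names are assumptions made for the example, not the documented PhoneMyBot API.

```python
# Illustrative outbound-call flow. Payload fields and status values
# are hypothetical, not the documented PhoneMyBot API.
def build_outbound_request(bot_id, phone_number, first_message):
    """Payload asking the platform to place a call and open it with a
    text-to-speech rendering of the chatbot's first message."""
    return {
        "bot_id": bot_id,
        "destination": phone_number,
        "initial_message": first_message,
    }

def handle_call_event(event, dial_next):
    """React to a status callback: converse when the user answers,
    otherwise skip ahead to the next number in the list."""
    status = event.get("status")
    if status == "answered":
        return "converse"
    if status in ("no_answer", "voicemail", "busy"):
        dial_next()  # the chatbot moves on to the next call
        return "skipped"
    return "unknown"

# Example: confirming an order before it ships.
request = build_outbound_request(
    "order-bot", "+15551234567",
    "Hi! Your order is ready to ship. Can you confirm the delivery date?",
)
```

The callback-driven shape matters: the chatbot never blocks on ringing phones, it simply reacts to each call's outcome as the platform reports it.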

The applications are numerous, all resulting in better customer experience and lower costs for the company:

  • Reservations and scheduling
  • Order confirmations, delivery alerts
  • Reminders or appointment confirmation
  • Service renewal
  • Upselling

With its outbound service, PhoneMyBot lets companies use chatbots in a completely new way, giving voice to their chat and opening new perspectives. To learn more, please visit https://www.phonemybot.com or contact us at info@phonemybot.com.
