3 roadblocks chatbots face

While we think of them as the latest thing in tech, conversational interfaces have been around for quite some time. From Cleverbot and SmarterChild to labyrinthine phone trees (“say REPRESENTATIVE”), we have been trying for years to build technology that mimics human conversation. Recent advances have positioned these tools for substantial growth and returned them to the foreground of conversations about the future of technology.

Conversational interfaces for both speech and text have risen to prominence thanks to virtual assistants, or “chatbots,” such as Apple’s Siri and Amazon’s Alexa. Text-based chatbots on messaging platforms such as Slack and Facebook Messenger have also seen a huge spike in use. All of this excitement is driving an influx of investment and interest in the market. Major brands like Domino’s are using virtual assistants to help customers order food, and healthcare companies like HealthTap have leveraged Facebook Messenger to connect patients with doctors.

While these companies have seen success with their platforms, they will also run into inevitable roadblocks. Is this technology as close to performing complex tasks as we think? In what contexts will users decide to turn to chatbots in place of other options?

While we don’t have all the answers, we can make some educated guesses on what problems chatbots may face based on what we know about the history of chatbots and how people have historically interacted with technology.

Roadblock #1: Language is complicated

Modern applications of conversational interfaces are built on advances in AI and the ubiquity of connected devices to provide users with shortcuts to complete generally simple requests, such as getting an answer to a straightforward question (“What’s the weather like in Chicago?”) or completing a quick task (“Remind me to call Linda in 30 minutes”).
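
To make this concrete, here is a minimal, rule-based sketch of how such a request might be mapped to an intent and its parameters. The patterns, intent names, and handler logic are invented for illustration; production assistants rely on trained language models rather than hand-written regular expressions.

```python
import re
from datetime import datetime, timedelta

# Hand-written patterns standing in for a trained intent classifier (illustrative only).
INTENT_PATTERNS = {
    "get_weather": re.compile(r"what'?s the weather (?:like )?in (?P<city>[\w\s]+)", re.I),
    "set_reminder": re.compile(r"remind me to (?P<task>.+?) in (?P<minutes>\d+) minutes?", re.I),
}

def parse(utterance):
    """Return (intent, slots) for a recognized utterance, or (None, {})."""
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            return intent, match.groupdict()
    return None, {}

def handle(utterance):
    intent, slots = parse(utterance)
    if intent == "get_weather":
        # A real assistant would call a weather service here.
        return f"Looking up the weather in {slots['city'].strip()}..."
    if intent == "set_reminder":
        due = datetime.now() + timedelta(minutes=int(slots["minutes"]))
        return f"OK, I'll remind you to {slots['task']} at {due:%H:%M}."
    return "Sorry, I didn't understand that."

print(handle("What's the weather like in Chicago?"))
print(handle("Remind me to call Linda in 30 minutes"))
```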

Like software, languages are built on a system of rules that develop and evolve over time. Unlike computers, however, the people who speak a language aren’t locked into those rules; they are free to form sentences, and even invent words, to convey a message. Beyond regional dialects, individuals develop unique patterns of speech, and humans are generally pretty good at understanding each other even when the syntax strays far from the rules of the language. When Stephen Colbert spoke of “truthiness,” it wasn’t hard to discern what he meant, despite the word not yet existing in the dictionary. Computers, historically, have had a much harder time with words like that. Machine learning has accelerated progress in speech recognition, but we’re not yet at a point where AI can keep up with the rapid evolution of speech or understand every particular way of speaking.

In traditional interfaces, accessibility is often tied to the user’s abilities in visual processing or physical manipulation. As conversation enters the fold, the system’s ability to understand and respond to a user’s unique patterns of speech, whether spoken or typed, adds a new layer of accessibility. A system that understands only perfectly formed speech will be inaccessible to vast numbers of people, who will find it tedious, at best, to rephrase everything to suit the computer. Likewise, a system that doesn’t respond in similarly natural language will never form the sort of familiar bond many product makers desire.
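
As a toy illustration of why rigid matching is a problem, compare an exact-match lookup with a crude character-level similarity score. The canonical phrasings, the 0.6 threshold, and the use of difflib here are purely illustrative; real systems use trained language-understanding models rather than string similarity, but the failure mode being avoided is the same.

```python
from difflib import SequenceMatcher

# Canonical phrasings mapped to intents (invented for this example).
CANONICAL = {
    "check my account balance": "check_balance",
    "transfer money to another account": "transfer_funds",
}

def exact_match(utterance):
    """Rigid matching: only the exact canonical wording is recognized."""
    return CANONICAL.get(utterance.lower().strip())

def fuzzy_match(utterance, threshold=0.6):
    """Looser matching: pick the most similar canonical phrasing, if close enough."""
    best_intent, best_score = None, 0.0
    for phrase, intent in CANONICAL.items():
        score = SequenceMatcher(None, utterance.lower(), phrase).ratio()
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None

print(exact_match("check my balance please"))  # None -- any rephrasing is rejected
print(fuzzy_match("check my balance please"))  # check_balance -- the rephrasing is tolerated
```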

Roadblock #2: Trust and understanding

We have seen a number of effective chatbots on the service side, such as those for banking. These days, it’s not uncommon to call your bank to check your balance without ever speaking to a human. With this, we have already begun to see handoffs between bots and humans, bridging the gap between automation and personal attention. These interactions, which are generally objective in nature, work well for AIs. People, in general, seem to trust computers with basic facts and figures, especially numbers (as in the case of an account balance).
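
A handoff policy like this can be sketched in a few lines, assuming a hypothetical language-understanding component that returns an intent and a confidence score. The intents, threshold, and routing rules below are invented for illustration and are not drawn from any real bank’s system.

```python
from dataclasses import dataclass

# Intents the bot is trusted to answer on its own (objective lookups).
OBJECTIVE_INTENTS = {"check_balance", "recent_transactions", "branch_hours"}
CONFIDENCE_THRESHOLD = 0.8

@dataclass
class NluResult:
    intent: str
    confidence: float

def route(nlu):
    """Decide whether the bot answers or a human agent takes over."""
    if nlu.confidence < CONFIDENCE_THRESHOLD:
        return "handoff_to_agent"   # the bot isn't sure what was asked
    if nlu.intent in OBJECTIVE_INTENTS:
        return "answer_with_bot"    # factual lookups the bot handles well
    return "handoff_to_agent"       # everything else goes to a person

print(route(NluResult("check_balance", 0.95)))   # answer_with_bot
print(route(NluResult("dispute_charge", 0.92)))  # handoff_to_agent
print(route(NluResult("check_balance", 0.40)))   # handoff_to_agent
```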

When you move past objective insights toward more subjective thinking, things get complicated. Telemedicine, which lets patients see a healthcare provider through remote communication tools without leaving the house, is increasing in popularity. This shift from an in-person to a digital approach works because it’s clear there is still a human on the other end. If you took away the human and replaced them with a bot, would the level of trust stay the same? Almost certainly not. The more objective part, taking a list of symptoms and producing a list of potential diagnoses, might work, in much the same way people use WebMD today. But getting from a list of potential diagnoses to an actual outcome, as your doctor would, requires complex understanding and judgment. Beyond the difficulty of getting this right lies the even greater burden of earning users’ trust in a computer-generated outcome.

Roadblock #3: Ease of use

So far, the chatbots that have come to market are not introducing wholly new behaviors, but rather new approaches to old problems. You can check your account balance, buy clothes, and order flowers from any web browser, and in many cases it will be a pretty familiar process. For chatbots to succeed, they’ll need to provide an experience that is fundamentally better than the familiar alternatives.

With chatbots, a better experience will mean simplicity and seamlessness. Sitting through a machine reading out a list of options isn’t fun, especially when you know you could select an option on the web in a fraction of the time. For some tasks, chatbots may never be the answer, because those tasks simply can’t be pared down enough to work through this sort of interface. For others, however, a combination of product simplification, personalization, and effective machine learning can solve many of these problems. By understanding the user more thoroughly, these systems can provide unique value and streamline tasks to the point that a conversation becomes the simplest interaction.
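
As a rough sketch of what that streamlining could look like, consider a bot that fills in details from a stored profile instead of asking a series of follow-up questions. The profile fields and ordering flow here are hypothetical.

```python
# A stored profile standing in for what a system has learned about the user (invented data).
USER_PROFILE = {
    "usual_order": {"item": "large pepperoni pizza", "address": "home"},
    "saved_addresses": {"home": "123 Main St"},
}

def order(utterance, profile):
    """Fill in missing details from the profile instead of asking follow-up questions."""
    if "usual" in utterance.lower():
        usual = profile["usual_order"]
        address = profile["saved_addresses"][usual["address"]]
        return f"Ordering a {usual['item']} to {address}. Confirm?"
    # Without a profile, the bot would have to ask for the item, size, and address in turn.
    return "What would you like to order, and where should we send it?"

print(order("order my usual", USER_PROFILE))  # one turn instead of several
```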

Looking to the future

As time goes on, language processing and AI in general will continue to improve, opening new opportunities for chatbots to perform complicated tasks with minimal input. While most bots are designed to replicate human processes, AI is a long way from being able to solve many complex human problems, or to do so in a way that’s easier than the alternatives. But we’re much closer now than we were when SmarterChild or Moviefone made waves, and significant investment is being made toward continued advancement in this area.
