
The capabilities of chatbots are limited by their very nature: they are not always up to the task when it comes to recognizing human conversation flows. That leads to the very common « I don’t understand your request, please rephrase » type of situation. Frustrating, isn’t it? It happens because « natural » language can’t always stay that natural in programming: we can feed the bot all the grammar rules imaginable, but in real human conversations there will always be some (or many) linguistic nuances and deviations from the standard. A chatbot relying only on keyword logic won’t be able to recognize those tiny details. So, the question is: how do we develop a chatbot that can understand our requests in all their diversity?

Illusion of understanding

To « comprehend » a human interlocutor, a typical bot compares what has been said with the large number of phrases it has been trained on. It finds similarities with those phrase patterns, determines the subject of the question, and performs the action programmed for that type of question. The interlocutor is then under the illusion that the bot understands them: it acts logically, reacts in a human way, and carries on the conversation. The famous Turing test is based on this illusion: if the evaluator can’t determine whether they are speaking to a machine or to a real human, the test is considered passed.

Ex Machina, a 2015 thriller by Alex Garland, focuses on the story of a robot challenged to pass the Turing test (no spoilers!).

Chatbots learn like children

Training a chatbot is a bit like educating a child. We should instruct it on what it can or cannot say in different contexts. We should also tell it how to understand and react properly to requests, and give it some rudimentary notions about how to make decisions on all of the above. From early childhood, humans know how to separate important information from noise, take into account the context of a dialog, and understand that the same idea can be expressed in many different ways. Unfortunately, when it comes to a chatbot we can’t really wait until it finishes the whole course of secondary education: it has to learn everything we learned while growing up in a ridiculously short period of time.

So how do we train a chatbot?

There are two main ways of training a bot: supervised and unsupervised. We’ve already covered why unsupervised learning is not yet adequate at the current level of AI development (after all, you wouldn’t leave a child to learn entirely on their own, would you?) in our previous article Chatbots for businesses: debunking common myths. That leaves us with supervised learning. How do we prepare the terrain for it?

In order to teach the bot to understand intents and entities (check Chatbots vocabulary: terms to know if you feel lost with those words), you need to annotate a large set of texts using special tools. To teach the bot to recognize named entities – a person’s name, a company name, a location – you also need a lot of texts to analyse. Hence, on one hand, the supervised learning algorithm is the most effective, as it makes it possible to build a solid recognition system; on the other hand, you need huge amounts of annotated data as a training ground, which is an expensive and time-consuming process (so it’s often easier to rely on services with already developed NLU engines).
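
To make this concrete, here is a minimal sketch of what one annotated training example might look like. The format, intent label, and entity types below are purely illustrative and do not correspond to any particular annotation tool or NLU engine:

```python
# A hypothetical annotated utterance for supervised NLU training.
# The intent label and entity spans are what a human annotator provides.
training_example = {
    "text": "Book me a table for two at Luigi's tomorrow at 8pm",
    "intent": "book_table",
    "entities": [
        {"value": "two",      "type": "party_size", "start": 20, "end": 23},
        {"value": "Luigi's",  "type": "restaurant", "start": 27, "end": 34},
        {"value": "tomorrow", "type": "date",       "start": 35, "end": 43},
        {"value": "8pm",      "type": "time",       "start": 47, "end": 50},
    ],
}
```

Thousands of examples like this one, covering many phrasings of the same intent, are what the supervised algorithm actually learns from.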

To get to the bottom of it, a « smart » assistant should:

  1. Understand correctly the meaning of the question
  2. Consider the context of the conversation
  3. Be able to come up with an adequate answer

Let’s dig into it for a while.

  1. Understanding training. The main technology modern bots rely on is NLU (Natural Language Understanding – read more about it here: How to choose between NLP and NLU for a chatbot?). It allows the machine to understand users and extract the parameters needed to process their requests. This is a fairly complex technological solution. The main query needs to be separated from the “unnecessary” elements; then the engine should recognize homonyms (“book” as in a written text and “book” as in “book me a table”), choose the most appropriate meaning from several options, and be able to handle grammatical errors in sentences. You need to teach bots to understand numbers written out as words and to recognize the meaning of a phrase despite misprints, slang, or inaccurate word order. I’ll skip the less obvious things, but there are thousands of rules to take into account.
    More often than not, virtual assistants, as well as chatbots, use quite superficial methods of text processing. The algorithm focuses on detecting the main “facts” or keywords in the request it has received. If such facts are detected with a certain degree of probability, the rest of the text is usually ignored. This can be enough for simple business tasks, as the sketch below illustrates.
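
As a rough illustration of this keyword-centric approach, here is a minimal sketch; the intents, keyword lists, and threshold are invented for the example:

```python
# A minimal sketch of the "surface" keyword approach described above.
# The intents and keyword lists are invented for illustration only.
INTENT_KEYWORDS = {
    "book_table": {"book", "table", "reservation", "reserve"},
    "opening_hours": {"open", "hours", "close", "closing"},
}

def detect_intent(text: str, threshold: float = 0.5) -> str | None:
    """Return the intent whose keywords cover the request best, if any."""
    words = set(text.lower().split())
    best_intent, best_score = None, 0.0
    for intent, keywords in INTENT_KEYWORDS.items():
        # Share of this intent's keywords found in the request.
        score = len(words & keywords) / len(keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    # Everything that is not a detected "fact" is simply ignored.
    return best_intent if best_score >= threshold else None

print(detect_intent("Can you book a table for tonight please?"))  # book_table
```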

The most common “surface” text model is the “bag of words”. This model turns each text into a vector of word-occurrence counts, without taking into account the position of the words relative to each other: for example, the sentence “I don’t like drawing much” would be represented roughly as {“I”, “not”, “like”, “draw”, “much”} (a minimal sketch of this model is given after the analyzer list below). In areas where full-text analysis is required and subtle semantic shades must be taken into account, this approach is too lightweight. Deep language processing methods based on fundamental approaches to text analysis will be more adequate. As a rule, the texts in this approach pass through a pipeline of analyzers:

– Graphematical (character processing)

– Lexical (word identification)

– Morphological (analysis of word forms)

– Syntactic (evaluation of the mutual position of words and their roles)

– Semantic (revealing the meaning of words and their connections with each other)

As a result of this analysis, the engine can take further action: skipping words, lemmatization, stemming, singularization, grammar-error analysis, and even synonym matching, among others.
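
To make the bag-of-words idea concrete, here is a minimal sketch in Python. The tiny lemma dictionary below is a stand-in for the morphological step of the pipeline above, not a real analyzer:

```python
from collections import Counter
import re

# A toy lemma dictionary standing in for a real morphological analyzer.
LEMMAS = {"don't": "not", "drawing": "draw", "likes": "like"}

def bag_of_words(text: str) -> Counter:
    """Turn a sentence into word-occurrence counts, ignoring word order."""
    tokens = re.findall(r"[\w']+", text.lower())
    lemmas = [LEMMAS.get(token, token) for token in tokens]
    return Counter(lemmas)

print(bag_of_words("I don't like drawing much"))
# Counter({'i': 1, 'not': 1, 'like': 1, 'draw': 1, 'much': 1})
```

Because word order is thrown away, such a model cannot tell “the customer cancelled the order” from “the order cancelled the customer”, which is exactly why deeper, pipeline-based analysis is needed for subtler cases.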


2. Context training. The technology inside the chatbot should take into account the context of the conversation. For example, the question “how will you translate ‘book a room’ into French?” can be recognized both as a reservation and as a translation request. The intent is not well defined. The “smart” assistant starts to act like a person: it correlates the phrase with the samples it has studied and finds the most appropriate one in terms of meaning. It thinks in classifications, correlating the new request with those already known. Metabots like Hubi, which work on modular logic, depend largely on adequate context processing. An advanced context search algorithm allows Hubi to successfully use different modules within the same channel, with classifications formed depending on the purpose of each module.
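
Here is a very simplified illustration of that “correlate with known samples” idea, with the conversation context modeled as a bias toward the intent of the currently active module. The samples, modules, and scoring below are invented and are not how Hubi’s engine actually works:

```python
import re

# A toy nearest-sample intent classifier; the samples and modules are invented.
SAMPLES = {
    "book_room": ["book a room", "I need a hotel room", "reserve a room"],
    "translate": ["how do you say hello in French", "translate this phrase"],
}

def tokens(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def classify(text: str, active_module: str | None = None) -> str:
    """Pick the intent whose samples share the most words with the request.

    The conversation context is modeled as a small bonus for the intent
    belonging to the module currently in use.
    """
    words = tokens(text)
    best_intent, best_score = None, -1.0
    for intent, samples in SAMPLES.items():
        overlap = max(len(words & tokens(sample)) for sample in samples)
        score = overlap + (0.5 if intent == active_module else 0.0)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

# The same ambiguous phrase is resolved differently depending on the context.
phrase = "how will you translate 'book a room' into French?"
print(classify(phrase, active_module="book_room"))   # book_room
print(classify(phrase, active_module="translate"))   # translate
```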


3. Giving an adequate answer. Even a quick internet search will give you loads of posts and screenshots of bots totally missing the point. But you can find just as many about the errors of human operators. In both cases, the problems are the same: superficial task planning, a poor training base, and no time for additional learning. Think of a chatbot as a watermelon filled with information instead of water: 80% of it consists of ready-made answers that have been written by humans. Without examples, even the smartest neural network won’t fully understand requests. Useful content rules above all. The better a company knows its customers, and the more information it can provide about their requests, the smarter the bot will be.
Developers must take every detail into account when working with content: every request must be processed correctly, extracting and providing the right data. Further training of the chatbot must also stay simple: customer questions change all the time and the bot must not lose its relevance. Remember, chatbots are only as intelligent as they are designed to be.
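
To picture the “watermelon” idea, here is a minimal sketch in which a recognized intent simply looks up a prepared, human-written reply, and keeping the bot up to date amounts to editing that content. All intents and answers are invented:

```python
# Prepared, human-written answers keyed by intent; all content is invented.
ANSWERS = {
    "opening_hours": "We are open every day from 9am to 7pm.",
    "book_table": "Sure! How many people and at what time?",
}
FALLBACK = "Sorry, I didn't get that. Could you rephrase?"

def reply(intent: str | None) -> str:
    """Return the ready-made answer for a recognized intent, or a fallback."""
    return ANSWERS.get(intent, FALLBACK)

# Keeping the bot relevant is mostly a content task: add or edit answers.
ANSWERS["delivery"] = "We deliver within 5 km of the restaurant."

print(reply("opening_hours"))  # We are open every day from 9am to 7pm.
print(reply(None))             # Sorry, I didn't get that. Could you rephrase?
```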

What about Hubi?

Hubi.ai is a platform that allows you to create contextual, modular chatbots from scratch. Bots created with Hubi can be embedded into messengers and websites. The design process takes place in two editors, a QnA editor and a scenario (dialog) editor, allowing the bot to adopt different types of logic while talking to you. We use our own NLU engine, which gives the bot an impressive level of understanding even with complex databases and scenarios. And with an easy and efficient supervised learning process, teaching your bot new things is painless.

Masha Isaeva, Learning & Chatbot hub manager at HubCollab