Assistants, bots and AI

Having worked with Microsoft Azure Bots, Cognitive Services and QnA Maker for over 12 months now, I thought I’d document some of my initial thoughts and findings.

Bad press

Virtual assistants seem to come in for quite a lot of bad press, especially for giving the wrong answers. There is always somebody who wants to ask a question and highlight an answer that is just wrong.

Highlighting the issue is perfectly acceptable, but it doesn’t mean the bot or assistant is useless; it just means that the question and answer pairs need tweaking or improving. Humans seem less inclined to highlight another human giving the wrong answer than a virtual assistant or bot doing the same.

Like humans, virtual assistants and bots need to be trained. You are never too old to learn something new, and neither is a bot.

In search of the one word question

Another common issue I see is that some teams expect one-word questions to return an accurate answer. This happens a lot when assistants are delivered via mobile applications.

If I wander around the campus on any given morning, walk up to 50 people and just say “Sandwiches”, I’m going to get some strange looks, possibly be given a couple of old cheese and pickle sandwiches, be told where I can buy some, or just be slapped around the head for being a little aggressive.

Looking at the Application Insights data for a couple of the bots that I’ve been working with, there is a noticeable improvement in answer relevancy scores when a user’s question contains a moderate number of words. With too few or too many words, the relevancy score for an answer starts to fall.
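As a rough illustration of that pattern, the sketch below buckets logged questions by word count and averages their relevancy scores. The question text and the scores here are made up for illustration; in practice the pairs would be exported from Application Insights.

```python
from statistics import mean

# Hypothetical (question, relevancy score) pairs as might be exported
# from Application Insights. Scores are illustrative only.
logged = [
    ("sandwiches", 0.31),
    ("where can I buy sandwiches", 0.87),
    ("what time does the canteen open", 0.81),
    ("I was wondering if you could possibly tell me where on the campus "
     "I might be able to purchase some sandwiches this morning", 0.44),
]

def mean_score_by_length(pairs):
    """Group questions into word-count buckets and average their scores."""
    buckets = {}
    for question, score in pairs:
        words = len(question.split())
        key = "1-2" if words <= 2 else "3-9" if words <= 9 else "10+"
        buckets.setdefault(key, []).append(score)
    return {key: round(mean(scores), 2) for key, scores in buckets.items()}

print(mean_score_by_length(logged))
```

With real logs this kind of summary makes it easy to see whether very short or very long questions are dragging relevancy down.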

Your assistant doesn’t have a doctorate from day one

One worry I often hear is “we simply can’t have the assistant give out any wrong information especially when it comes to health”.

Whilst this is a fair comment, many FAQ assistants are simply repeating answers that you have already published. One simple solution is to inform the user that they should seek advice from an expert if they are unsure about the answer.
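That “seek expert advice” safeguard can be as simple as appending a note to answers in sensitive categories. The category names and the wording below are assumptions, not part of any framework:

```python
# Minimal sketch: append a "seek expert advice" note to answers in
# sensitive categories. Categories and wording are assumptions.
SENSITIVE_CATEGORIES = {"health", "legal", "financial"}

DISCLAIMER = (
    " If you are unsure about this answer, please seek advice "
    "from a qualified professional."
)

def format_answer(answer, category):
    """Return the answer, adding the disclaimer for sensitive topics."""
    if category in SENSITIVE_CATEGORIES:
        return answer + DISCLAIMER
    return answer

print(format_answer("Paracetamol can be taken every 4-6 hours.", "health"))
```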

Another approach is to consider the difference in responses to a medical question from a first-year medical student compared to a fully qualified general practitioner or a specialist. We might not expect the first-year medical student to provide the correct answer all of the time, because they have less experience than the GP or specialist.

The same applies to your bot. The GP or specialist has had years of training and learned from experience. To get the same sort of results from your assistant or bot, the same level of training and development needs to be applied.

Having said this, there are now a number of frameworks available that reduce the amount of time you need to invest in developing your bot and help improve its responses for specific scenarios such as medical advice.

The stale assistant

Once an assistant is live, remember to review the content and answers on a regular basis. A bot’s usage will suffer if users are presented with answers that are out of date.

Content

Another challenge to solve is ensuring that the content for the FAQ is suitable for publishing via an assistant. Many FAQ answers are very detailed.

Most users prefer a short, concise answer which can be expanded on with a fuller answer. The Microsoft Bot Framework makes it simple to achieve this by adding follow-up answers.
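In QnA Maker this pattern maps onto multi-turn follow-up prompts attached to a QnA pair. The field names below follow the v4 REST API’s QnA item shape, but treat them as an assumption and check them against your service version; the `qnaId` is a hypothetical id for the detailed answer.

```python
# Sketch of the short-answer-plus-prompt pattern as a QnA Maker QnA
# item. Field names follow the v4 REST API shape (an assumption to
# verify against your service version).
qna_pair = {
    "questions": ["How do I reset my password?"],
    "answer": "Use the self-service portal to reset your password.",
    "context": {
        "isContextOnly": False,
        "prompts": [
            {
                "displayOrder": 1,
                "displayText": "Show me the full steps",
                "qnaId": 42,  # hypothetical id of the detailed answer
            }
        ],
    },
}

# The concise answer is shown first; the prompt offers the fuller one.
print(qna_pair["answer"])
print(qna_pair["context"]["prompts"][0]["displayText"])
```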

QnA Maker will make an attempt at importing question and answer pairs provided they are formatted in a very specific way. Unfortunately, many web-facing FAQs contain a lot of branding, and the extra HTML markup means QnA Maker is unable to import them directly.

Where QnA Maker is unable to import content from web-based FAQs, it is possible to use PowerShell or C# to add question and answer pairs programmatically.
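The same can be done from Python against the QnA Maker v4.0 REST API. The sketch below builds the payload for the update-knowledgebase operation; the resource name, knowledge base id and key are placeholders, and the commented-out `requests.patch` call is how the update would be sent with the `requests` library installed.

```python
import json

# Sketch of adding question-and-answer pairs via the QnA Maker v4.0
# REST API. Resource name, kb_id and key below are placeholders.
kb_id = "YOUR_KB_ID"
endpoint = "https://YOUR_RESOURCE.cognitiveservices.azure.com"

payload = {
    "add": {
        "qnaList": [
            {
                "questions": ["What time does the canteen open?"],
                "answer": "The canteen opens at 8am on weekdays.",
            }
        ]
    }
}

# With the requests library installed, the update call would look like:
# requests.patch(
#     f"{endpoint}/qnamaker/v4.0/knowledgebases/{kb_id}",
#     headers={"Ocp-Apim-Subscription-Key": "YOUR_KEY",
#              "Content-Type": "application/json"},
#     data=json.dumps(payload),
# )

print(json.dumps(payload, indent=2))
```

Remember that after an update the knowledge base still needs to be trained and published before the new pairs are served.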

Giving an assistant a chance

There are a number of techniques to improve the adoption of your virtual assistant or FAQ bot; I will look at these in more detail in upcoming posts:

  • Ask the users to rate an answer
  • Monitor sentiment and provide a method for escalating a session to a real person
  • Ensure your initial response is concise and provide a link to a more detailed answer
  • Where a session is escalated to a real person make sure the user does not have to repeat the question
  • Keep your content up to date
  • Where an answer or outcome involves a response that is medical in nature ensure the user is prompted to seek advice where they are unsure
  • Provide users with a guide on how to get the best from an assistant
  • Regularly monitor the questions being asked and the relevancy score of answers
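The sentiment-escalation point above can be sketched as a simple threshold check. The threshold and scores here are assumptions; in practice the score for each message would come from a service such as Azure Text Analytics, on its 0 (negative) to 1 (positive) scale.

```python
# Minimal sketch of sentiment-based escalation. The threshold and the
# example scores are assumptions; real scores would come from a
# sentiment service such as Azure Text Analytics.
ESCALATION_THRESHOLD = 0.3  # 0 = negative, 1 = positive

def should_escalate(sentiment_scores, threshold=ESCALATION_THRESHOLD):
    """Escalate when the most recent message falls below the threshold."""
    return bool(sentiment_scores) and sentiment_scores[-1] < threshold

# A session drifting from neutral towards frustrated:
session = [0.6, 0.5, 0.2]
print(should_escalate(session))
```

When escalation fires, passing the session transcript to the human agent covers the point above about not making the user repeat their question.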

More information

For more information about the principles of bot design, see the following article.

https://docs.microsoft.com/en-us/azure/bot-service/bot-service-design-principles?view=azure-bot-service-4.0#designing-a-bot

In my next article I will look at how to put together a simple FAQ assistant.
