By telling a user what the chatbot is there to do, you set expectations and show the user how to achieve their objective. Very few chatbots exist purely to entertain; the majority exist to solve a user's problem. Just tell the user the best way to talk to solve it. Simples.
Here are some examples of how to make it clear a machine, not a human, is talking.
Call the chatbot "something-bot". Sally the Sales bot, Customer service CliveBot, George the chatbot, Sharon the PA bot... you get the idea.
Put a sentence in the very first message the bot sends to explain what it is and what it can do. "Hi, I'm Dean Withey Bot. I can help you book a meeting with Dean, tell you his latest posts and get you his contact details."
Tip 2: Tell the user how to speak to a human
A chatbot exists to help a user, to solve their problem and get them where they need to go. We are pretty good at making chatbots, but even we cannot make a chatbot do evvvveeerrryything. Sometimes, you know, real humans have to get involved.
By telling a user how to get in contact with a person (and falling back to that human gracefully -- read pt. 6.c. to learn more), you are solving problems faster. After all, if you scope the chatbot well, it will still take care of the majority of low-hanging inbound conversations.
When done well, a chatbot can make human-human interaction more efficient and a better experience. If your company does not have the resources to facilitate a live human take-over, simply have it grab some details and take a message. "Sorry, no one is around at the moment, leave your contact number or email, and someone will get back to you within an hour".
Here are some of our best practices on telling users how to talk to a human:
Give users an immediate choice to speak to a human. We have had success with website live-chat bots where the first message forces the user to make a decision. Example: "Hi, I am a customer service chatbot, I can help you manage your account, check your orders and change your booking. If you'd like to do any of this just respond with hi. If you want to speak to a human, reply with the word HUMAN and someone will be right here."
The "I don't understand" intercept. Regardless of how many thousands of hours you spend training the library, the chatbot is not always going to understand a user. Use this scenario to introduce a human takeover. "I'm sorry pal. I'm just not getting you. To speak to a human reply with human and I'll get out of your way".
Persistent keywords. All of our chatbots ship with built-in black and whitelisted words (trust me, you do not ever want to read the blacklisted word list). In the whitelist, we add what we call "angry words". These are global keywords which, when a user sends, automatically trigger a 'how to speak to a human message'. Examples of these words are: "Human, OMG shut up, go away, I want to speak to a human". We tend to rely on exact match logic for these keywords rather than any NLP - this makes sure they are never missed.
Persistent menus. If a chatbot is deployed via a menu-compatible messaging app (like Facebook Messenger), then you absolutely must ensure one of those buttons is a 'speak to a human trigger'. Even if there is no human available for takeover, have the chatbot take a message and get someone to contact them. Remember, the chatbot is solving a problem, not making things worse.
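The two escalation paths above, exact-match "angry words" and the low-confidence "I don't understand" intercept, can be sketched together in a few lines. This is only an illustrative routing function, not the API of any real chatbot framework; the names, the confidence threshold, and the `route_message` signature are all assumptions for the example.

```python
from typing import Optional

HANDOVER_MESSAGE = "To speak to a human reply with human and I'll get out of your way."

# Hypothetical global whitelist of "angry words". When a user's message
# exactly matches one of these (after normalisation), we always offer a
# human handover, no NLP involved, so the escape hatch is never missed.
ANGRY_WORDS = {
    "human",
    "omg shut up",
    "go away",
    "i want to speak to a human",
}

# Assumed cut-off below which the bot admits it does not understand.
CONFIDENCE_THRESHOLD = 0.6


def route_message(text: str, intent: Optional[str], confidence: float) -> str:
    """Decide the reply: keyword handover, fallback intercept, or normal intent."""
    normalised = text.strip().lower()

    # Persistent keywords: exact match, checked before anything else.
    if normalised in ANGRY_WORDS:
        return HANDOVER_MESSAGE

    # The "I don't understand" intercept: low confidence offers a human.
    if intent is None or confidence < CONFIDENCE_THRESHOLD:
        return "I'm sorry pal. I'm just not getting you. " + HANDOVER_MESSAGE

    return f"Handling intent: {intent}"
```

The key design point is that the keyword check runs first and uses plain string equality, so even a badly trained NLP model can never swallow the user's request to escape.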
Tip 3: Get the boring legal stuff out of the way
While no one has reeeaaallllyyy started talking about the legals on communicating with chatbots, they soon will. It might be a year or more away, but soon, chatbots are going to have to include clear opt-in, terms and conditions and privacy documentation.
If nothing else, in the EU on 25th May 2018, the General Data Protection Regulation (GDPR) becomes enforceable. It will almost certainly affect machine-driven online conversations.
On a personal note, I believe that when people are confronted with opt-in and legal documentation, it makes everything feel more official, and that makes them more willing to interact.
If I am talking to a chatbot about my personal circumstances and giving it my data, I want to be mighty sure the company has at least pretended to jump through some legal hoops and considered data privacy, storage and transmission. Presenting me with opt-in and legal stuff gives me confidence and, therefore, I invest more understanding and time.
There are three main ways we encourage incorporating opt-ins and legals into chatbot conversations. Each one demands a different level of 'effort' from the user, and, broadly speaking, the more effort required, the higher the conversation abandonment. That said, if a user gets through the highest level of effort, you can pretty much call them super-engaged!
Effort-based opt-in and legal compliance. An example could be to send an entirely separate message in the onboarding sequence with two buttons underneath. One button is "Yes" or "I agree" and the other is "No". The message can be as number 1 above, but to continue the experience, the user has to click the positive button. If they click the no button, the chatbot sends a message to say something like "Sorry, that means we can't continue talking - if you change your mind just click the Yes button above."
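The button-based opt-in above is a tiny state machine: nothing proceeds until the positive button is clicked. Here is a minimal sketch, assuming a hypothetical `handle_opt_in` handler that receives the clicked button label; none of these names come from a real messaging platform's API.

```python
from typing import Tuple

DECLINE_MESSAGE = (
    "Sorry, that means we can't continue talking - if you change your "
    "mind just click the Yes button above."
)


def handle_opt_in(button_label: str) -> Tuple[bool, str]:
    """Return (opted_in, reply) for a Yes/No opt-in button click.

    Only the positive labels unlock the rest of the conversation;
    anything else sends the polite decline and leaves the buttons up.
    """
    if button_label in ("Yes", "I agree"):
        return True, "Great, let's get started!"
    return False, DECLINE_MESSAGE
```

Because the decline path leaves the original buttons in place, the user can change their mind later without the bot having to re-send the legal message.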
So, there we have it. Admittedly the whole legal bit was a bit dull, but there are my top 3 tips for a 'proper' chatbot onboarding.