
You know, I was excited about getting back into writing every day.

After a month of not prioritising content, this week I was back in the game. I sat down this morning, coffee in hand and cat on my lap, headed to our editorial calendar to check what I'm pencilled in to write and BAAAM.

Chatbot security.

Yuck! Argh, what a topic...

But, alas, the brain starts whirring... hmm.

Chatbots that help with home security? They could tie into cameras and facial/motion detection and alert you when something is amiss? Meh, I'm sure it's already been done.

A chatbot for security guards? Telling them when it’s time to get up and walk to this point, check that door or make another cup of coffee? Nope, clutching at straws that one.

Oh oh, a chatbot for personal security? 'Help, I’m being attacked, help me Mr. Chatbot Police-machine-thing.' Argh, nope, no legs on that one (literally).

Right, it seems as though I’m going to have to go full dry mode.

Martini style.

 

Introducing chatbot security

Let's, *yawn*, talk about the security concerns around businesses implementing and using chatbots.

To be fair, despite the mass proliferation of the technology, very little security content has been produced.

As these little thingies are becoming more and more intelligent, they’re connecting to more services and being powered up with more advanced capabilities.

Connecting to financial institutions, collecting privileged and personal data, and monitoring a user's location and movement are already pretty out-of-the-box in the chatbots we build.

Yes, they’re new, shiny and exciting, but both businesses and consumers should give a quick thought to what’s happening behind the scenes.

 

Be aware of GDPR

First, let’s look at how data is handled, particularly relevant with the looming GDPR.

Chatbots that learn over time tend to take all incoming messages and dump them into code soup to run machine learning algorithms on. As a consumer, if I asked for my data back, would that even be possible? As a business, where else is this data used and who can peep at it?

(And can I see the code soup ‘cos it sounds cool).
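For what it's worth, honouring those requests gets much easier if messages are keyed to a user from day one rather than tipped straight into the soup. A back-of-a-napkin sketch (all class and method names here are hypothetical, not any particular platform's API):

```python
import json
from collections import defaultdict

# Hypothetical in-memory message store, keyed by user ID, so that
# GDPR-style access and erasure requests can actually be honoured.
class MessageStore:
    def __init__(self):
        self._messages = defaultdict(list)

    def log(self, user_id, text):
        self._messages[user_id].append(text)

    def export_user_data(self, user_id):
        # Subject access request: hand the user back everything we hold on them.
        return json.dumps({"user_id": user_id,
                           "messages": self._messages[user_id]})

    def erase_user_data(self, user_id):
        # Right to erasure: actually delete, don't just hide.
        self._messages.pop(user_id, None)
```

The point isn't the data structure, it's the shape: if you can't answer "show me everything you hold on user X", you can't answer a subject access request either.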

 

The future of chatbot security

How about the technology-based implications of chatbot security?

As leading geeks in the field, we have seen chatbots adopting standard security techniques found in conventional mobile technology. Really, chatbots have been shoehorned into things like single sign-on (SSO), two-factor authentication (2FA) and biometrics.

While this is fine for now, as NLP and ML become more of a commodity, we will need security thinkers and geeks to take another pass.

Tokens are a good example; Facebook Messenger chatbots use them quite a lot. You talk to your banking chatbot and log in to it using your standard bank username and password. You are given the opportunity to register your Facebook ID to your bank account. Now, like magic, your Facebook chatbot can seamlessly access your banking information... and so, potentially, can anyone who gets hold of that token.
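The worry isn't tokens as such, it's tokens that never expire and can do everything. A minimal sketch of a scoped, short-lived token using HMAC signing (an illustration of the idea, not how Messenger or any bank actually mints tokens; the key and scopes are made up):

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical signing key, never sent to the client

def mint_token(user_id, scope, ttl_seconds=300):
    # The token encodes who it's for, what it may do, and when it dies.
    expires = int(time.time()) + ttl_seconds
    payload = f"{user_id}|{scope}|{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token, required_scope):
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # signature mismatch: token was tampered with
    user_id, scope, expires = payload.split("|")
    return scope == required_scope and int(expires) > time.time()
```

A token minted for reading a balance then simply fails verification if someone tries to use it to move money, or after its five minutes are up.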

Also, I don't have nearly enough geek power to talk knowledgeably about encryption. But: is consumer data encrypted in transit? Is it encrypted at rest? Where is it even resting? And does data when it's sleeping look the same as data when it's moving (or is that just me)?
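For the in-transit half at least, modern libraries ship with sensible defaults, so a developer mostly has to avoid turning them off. A quick sketch with Python's ssl module:

```python
import ssl

# The default client context verifies the server's certificate and
# checks the hostname matches it; the work is refusing to disable this.
ctx = ssl.create_default_context()

# check_hostname is on and certificates are required by default.
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED
```

Encryption at rest is a separate question with no one-liner answer, which is rather the point of asking it.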

 

Security starts at home

Lastly, research (a trusted Google search) tells me that most security breaches are due to social engineering rather than super-geeks. You know, things like network users clicking a dodgy link, downloading a funky PDF or James Bond leaving his briefcase on a train.

Chatbots are flying off our shelves and consumers are readily adopting them. How long will it take for messaging app phishing scams, imposter chatbots and spoofing to start? Has it already started (dun dun duuuuuun)?

As we continue to improve our NLP/NLU, how long will it be until we can convincingly replicate a human?

'Hi Dean, it’s your mum. Had a phone call today from Santander, can you give them a ring on 0800 SCAM ME. Thx. Don’t forget to leave your dirty socks out'.

(FYI, my mother doesn’t do my washing anymore.)

Like most other forms of social engineering, education is going to be key. The goal is to raise the general awareness of chatbot users, so they don’t just assume, click or trust.
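And chatbot builders can lend a hand too, with even crude hygiene on the bot's side, e.g. refusing to relay links that aren't on a known-good list. A toy sketch (the allowlist domains are made-up examples):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains the chatbot is permitted to link to.
TRUSTED_DOMAINS = {"santander.co.uk", "example-bank.com"}

def is_trusted_link(url):
    host = urlparse(url).hostname or ""
    # Accept the domain itself or a genuine subdomain of it, nothing else,
    # so "santander.co.uk.evil.example" doesn't sneak through.
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)
```

It won't stop a determined scammer, but it does stop the bot itself becoming the dodgy-link delivery channel.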

 

Conclusion

So yeah, I didn’t get to write about super-smart chatbots that ‘do’ security, but I hope my warble wasn’t too dry for you.

You all know I think chatbots represent a new paradigm in how humans and machines interact. They’re a huge boon for service, sales, and internal communication. However, like most technologies, they come with security caveats and concerns attached.

By learning and forming best practices, hopefully, little chatbots and humans can get along just fine.

Seen any security concerns in the chatbots you’ve used? Let me know on Twitter or LinkedIn.