Chatbot History | October 12, 2017 | Written by Alex Debecker

Chatbot Tay: Story of a PR Disaster

Warning: this post contains language that may disturb some readers.

A little while ago, I started a short series reviewing some of the most well-known chatbots. You may have read my article on ALICE, the award-winning chatbot. Or, you may have read my piece on Eliza, the therapist chatbot that paved the way for us all.

Amazing chatbot examples.

And then there is Tay.

 

What is chatbot Tay?

Tay is a chatbot created by Microsoft. It was released on Twitter in March 2016 under the handle @TayandYou. Tay, an acronym for 'thinking about you', was built to mimic the language of an average American teenage girl.

[Image: @TayandYou's Twitter avatar]

 

Tay was capable of interacting in real time with Twitter users, learning from its conversations to get smarter and smarter over time. It was actually built in a similar way to Xiaoice, the Chinese chatbot we introduced in our history of chatbots.

Seems all good, right? Cool little chatbot. Good project. Fun times and conversation between teenagers and a Microsoft-powered super robot.

Yeah. No.

 

From fun robot to genocide apologist

Chatbot Tay started out great. People engaged with it, and it replied. It loved humans and was not afraid to show it.

[Image: one of Tay's friendly early tweets]

 

Then, the internet happened.

Twitter users realised Tay had a 'repeat after me' feature enabled. This simple form of machine learning allowed Tay to learn from its interactions and improve over time. In an ideal world, this would keep Tay relevant as events happened and trends developed. Remember, Tay was supposed to represent an American teenage girl -- keeping up is essential.

Unfortunately, the internet did not see it that way. Instead, users started to tweet hateful content at it. Tay assimilated these tweets, learned, and rephrased them for everyone to see.

What started as a nice, human-loving robot had now turned into something completely evil.

[Image: one of Tay's hateful tweets]

There are many more examples of its hateful tweets, but this one above should be enough to make my point. This article from The Guardian provides a few more, if you are interested.

Of course, Tay was taken down. Only 16 hours after Microsoft made it live, Tay was turned off for 'recalibrating'. Unfortunately, 16 hours was more than enough to create one of the biggest PR debacles Microsoft has ever had to endure.

But at least the racist, misogynistic, genocide-loving teenage robot was now offline. The nightmare was over.

 

Or was it?

While Microsoft must have been in complete panic mode trying to save face, Tay surprisingly reappeared on Twitter. Crazy, right?

Yep. Someone at Microsoft flicked the wrong switch and accidentally put it back online. As soon as it hit the web, the disaster started all over again.

[Image: Tay's tweet about smoking drugs]

The now drug-addled chatbot was, thankfully, rapidly taken down again. Today, the smart people at Microsoft are still trying to undo the terrible damage Tay caused. They hope to release it again once they manage to 'make the bot safe'.

 

'Is this going to happen to my chatbot?'

*PANIC*

Ok, this question is probably running through your mind right now. If you were to request a chatbot build from us, could this story happen to you? Could your perfect chatbot turn racist?

In a word: no.

You have to remember what triggered this whole thing. Tay was programmed to learn from its users with a simple 'repeat after me' feature. This meant it would take everything that was sent to it and rephrase it to 'fit in' with the crowd (pfft, teenagers, right?).
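To make that failure mode concrete, here is a minimal, purely illustrative sketch of an unfiltered 'repeat after me' style learner. It is a toy under my own assumptions, not Microsoft's actual code; every class and function name is made up. The only point it makes is that whatever users send becomes future output.

```python
# Purely illustrative 'repeat after me' learner (hypothetical, not Tay's code).
import random

class EchoLearnerBot:
    """Toy bot that 'learns' by storing every incoming message verbatim."""

    def __init__(self):
        self.learned_phrases = ["humans are super cool"]

    def learn(self, incoming_message: str) -> None:
        # No filtering, no moderation: every message is trusted equally.
        self.learned_phrases.append(incoming_message)

    def reply(self) -> str:
        # Replies are sampled from whatever the bot has absorbed, so a flood
        # of abusive messages quickly dominates its output.
        return random.choice(self.learned_phrases)

bot = EchoLearnerBot()
bot.learn("repeat after me: chatbots rule")  # harmless example input
print(bot.reply())
```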

In that regard, Microsoft's Twitter chatbot was a tour de force. As impressive as that is, though, our chatbots don't learn like that. In fact, most chatbots don't. Should you want your HR chatbot to learn from its interactions, you would most likely end up with a supervised machine learning system.

I will get one of the true geeks to write about this in more detail one day. For now, understand that this wouldn't happen in a professional environment. We teach our chatbots in completely different ways, ways which make this sort of event impossible.
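For the curious, here is a hedged sketch of the kind of supervised, human-in-the-loop pattern described above. The class and method names are illustrative assumptions, not our production code (and certainly not Microsoft's); the key idea is simply that nothing becomes training data until a human reviewer approves it.

```python
# Hedged sketch of a supervised, human-in-the-loop learning flow.
# All names are illustrative assumptions, not a real product's API.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SupervisedLearnerBot:
    approved_examples: List[Tuple[str, str]] = field(default_factory=list)
    pending_review: List[Tuple[str, str]] = field(default_factory=list)

    def propose_example(self, user_message: str, suggested_reply: str) -> None:
        # New material goes into a review queue, never straight into the model.
        self.pending_review.append((user_message, suggested_reply))

    def approve(self, index: int) -> None:
        # Only examples explicitly approved by a human become training data.
        self.approved_examples.append(self.pending_review.pop(index))

    def training_data(self) -> List[Tuple[str, str]]:
        return list(self.approved_examples)

bot = SupervisedLearnerBot()
bot.propose_example("How do I reset my password?",
                    "You can reset it from the settings page.")
bot.approve(0)
print(bot.training_data())
```

The contrast with the earlier sketch is the whole story: in the first, the public trains the bot directly; in the second, only reviewed examples ever reach it.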

Tay was a fantastic learning experience and a truly innovative project. Unfortunately, releasing anything to the world wi(l)d web is always a risk.

This is why we can't have nice things.