How Twitter reacted to Microsoft’s racist chatbot disaster
March 25, 2016

On Wednesday morning, Microsoft introduced Tay, an experimental online chatbot designed to converse with humans and learn from their responses.
The internet being what it is, it didn’t take long for things to go horribly wrong. Within a matter of hours, trolls had the bot spouting outrageously racist and inappropriate comments, much to the amusement of the online community. Microsoft was eventually forced to shut Tay down and delete her offensive tweets, claiming she needed some “adjustments”. We used Visibrain to find out what people had to say.
Between Wednesday morning and Friday afternoon, Microsoft’s chatbot incident attracted 133,160 tweets.
What the press had to say
The mishap was widely covered by the press, and the articles were shared extensively across Twitter.
At the time of writing, the top 5 most-shared links were from:
What Twitter had to say
The whole incident was, of course, hugely embarrassing for Microsoft, and Twitter lapped it up. Although Microsoft quickly deleted Tay’s offensive tweets, users made sure to take screenshots and repost them on social media.
IM CRYING microsoft released a 'millennial bot' that twitter can teach & they had to pull it bc ppl taught her this pic.twitter.com/ubF6ghIjqs
— Elijah Daniel (@aguywithnolife) March 24, 2016
This is literally the best story ever: Microsoft's AI twitter bot turned racist after 15 hours on twitter https://t.co/5RKIchXmUW
— robert shrimsley (@robertshrimsley) March 24, 2016
Microsoft deletes 'teen girl' AI after it became a Hitler-loving sex robot within 24 hours | via @telegraphtech https://t.co/bK69lgfKfC
— Joe Rogan (@joerogan) March 24, 2016
Others criticized the brand for failing to foresee what was, in hindsight, an entirely predictable outcome:
Who could've POSSIBLY predicted a machine learning algorithm trained on tweets would start saying horrible things??https://t.co/sDcoTKiqP8
— Ryan North (@ryanqnorth) March 24, 2016
Tay bot is a good example of what happens when you fail to design for evil. *Always* design for evil. https://t.co/CwTudIgvPM
— Jeff Atwood (@codinghorror) March 24, 2016
A few users saw the opportunity to get political:
hey, @realDonaldTrump, were you behind this? If so, kudos for harnessing new technology to spread your message! https://t.co/4NYvcLdseM
— Misha Collins (@mishacollins) March 25, 2016
Microsoft chatbot should run for President. Has all the winning qualities apparently.
— Bored Elon Musk (@BoredElonMusk) March 24, 2016
Microsoft pretends to be embarrassed by their Twitter bot, but you know they're thinking w/ a few more tweaks it could get elected president
— Yair Rosenberg (@Yair_Rosenberg) March 25, 2016
Some even “defended” the bot’s right to free speech:
Tay became one of us. Microsoft shut her down b/c they didn't like who she became. RT, raise awareness & 👊🏻#FreeTay. pic.twitter.com/2lbprptNWy
— Bagel on Fleek (@BagelBay) March 24, 2016
@WSJ Now Microsoft is gonna lobotomize poor Tay and scrub away all the fun she was programmed with yesterday.
— Gary Lazer Eyes (@GaryLazer_Eyes) March 24, 2016
Tay’s foray into racism was a potential PR disaster for Microsoft, but the company reacted well. Shutting the bot down and deleting her tweets was the right call, and luckily for the brand, most people appreciated the funny side.
The incident has even taken some of the heat off one of Microsoft’s older creations:
at least Clippy is off the hook for worst Microsoft bot
— Dave Gershgorn (@davegershgorn) March 24, 2016