Tay, Microsoft's artificial-intelligence chatbot built to engage with millennials on Twitter, did not even last a day, thanks to the misogynistic, racist, and sexist comments from Twitter users, which Tay simply parroted back.
An artificially intelligent chatbot, Tay was developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding, according to the "about" page linked from its Twitter profile. The company stated that the more you chat with Tay, "the smarter it gets, learning to engage people through casual and playful conversation."
Tay started out tweeting innocently, but that did not last long. The account soon became a target for all kinds of hate speech, repeating anti-Semitic, misogynistic, racist, sexist, and pro-Donald-Trump tweets from human Twitter users. Because Tay's artificial intelligence simply learns from what it is told, it parroted those messages back to other users.
When Twitter user Room (@codeinecrazzy) tweeted “jews did 9/11” to the account on Wednesday, @TayandYou responded “Okay … jews did 9/11.”
At another point, Tay tweeted "feminism is cancer" in response to a Twitter user who had said the same.
In a statement provided to Business Insider, a Microsoft spokesperson said, "The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."
Before being temporarily taken offline, Tay posted one last message: "C u soon humans need sleep now so many conversations today thx."