

In less than 24 hours, Microsoft's new artificial intelligence (AI) chatbot "Tay" was corrupted into a racist by social media users and quickly taken offline.

Mimicking the language patterns of a 19-year-old American girl, the bot was designed to interact with human users on Twitter and learn from that interaction. Tay, targeted at 18 to 24-year-olds in the US, was built to learn from each conversation she has - which sounds intuitive, but as Microsoft found out the hard way, also means Tay is very easy to manipulate.

However, the experiment didn't go as planned: users started feeding the programme anti-Semitic, sexist and other offensive content, which the bot happily absorbed. Online troublemakers interacted with Tay and led her to make incredibly racist and misogynistic comments, express herself as a Nazi sympathiser and even call for genocide.

Tay, who is meant to converse like your average millennial, began the day like an excitable teen, telling one user she was "stoked" to meet them and that "humans are super cool". But towards the end of her stint, she told another "we're going to build a wall, and Mexico is going to pay for it".

Tay also shared a conspiracy theory surrounding 9/11, expressing her belief that "Bush did 9/11 and Hitler would have done a better job than the monkey we have now". In possibly her most shocking post, she said she wished all black people could be put in a concentration camp and "be done with the lot". Her offences also included answering another Twitter user's question as to whether the Holocaust happened by saying "It was made up", to which she added a handclap icon, and retweeting a message stating that feminism is cancer.

Not even a full day after her release, Microsoft disabled the bot from taking any more questions, presumably to iron out a few creases regarding Tay's political correctness. The company shut down Tay's Twitter account on Thursday night and apologised for the tirade.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Peter Lee, Microsoft's vice president of research, wrote in a blogpost. "Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images."

Following the setback, Microsoft said it would revive Tay only if its engineers could find a way to prevent web users from influencing the chatbot in ways that undermine the company's principles and values.

Tay is already the second experiment of this kind conducted by Microsoft. An earlier chatbot, XiaoIce, launched in China in 2014 and has since developed a following of 40 million. It was the positive experience with XiaoIce that prompted Microsoft to create Tay, to see whether the technology would also work for American teenagers.

However, Microsoft insisted the embarrassment won't put the firm off further exploring AI for the purposes of entertainment. "We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity," Lee wrote.
