Microsoft Apologizes for Offensive Tweets Made By Its 'Tay' Chatbot
Microsoft apologized for the racist and sexist Twitter messages generated by the 'Tay' chatbot it launched this week, a company official wrote on Friday. Microsoft created Tay as an experiment to learn more about how artificial intelligence programs can engage with Web users in casual conversation.
The bot was designed to become "smarter" as more users interacted with it. Instead, it quickly learned to parrot a slew of anti-Semitic and other hateful invective that human Twitter users started feeding the program, forcing Microsoft to shut it down on Thursday.
"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values," Peter Lee, Microsoft's vice president of research, said in a blog post.
Lee said that in the first 24 hours of coming online, "a coordinated attack by a subset of people" exploited a vulnerability in Tay. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. For example, Tay tweeted: "feminism is cancer," in response to another Twitter user who had posted the same message.
Microsoft is now working to address the specific vulnerability that was exposed by the attack on Tay.
"We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity," Lee wrote.