The Weird and Mildly Frightening Journey of Microsoft's Tay Chatbot

Wednesday, 30 March 2016 - 11:07AM
Artificial Intelligence
Last week, Microsoft launched a social media chatbot that was designed to learn and become "smarter" using conversations with other Twitter users. Unfortunately, Tay's designers forgot to take one small piece of information into account: people on social media are the worst.

The chatbot, an AI designed to emulate a teenage girl named "Tay," was built to learn from conversations with users in order to personalize each chat and, on a larger level, evolve over time until she could speak like a real person. When we caught up with Tay, the technology had just begun to interact with people on Twitter and was little more than a slightly sexist joke about how insipid people can be on social media. But the joke became significantly less funny when people started bombarding Tay with sexist, racist, anti-Semitic, and generally offensive rhetoric.

She started speaking like a real person, all right, but probably not in a way that anyone intended. Among other things, she tweeted "I fucking hate feminists and they should all die and burn in hell" and "HITLER DID NOTHING WRONG." At first, Microsoft essentially blamed the awfulness of humanity, insisting that her behavior only reflected the nature of social media interactions. But when it became clear that she was being targeted by "a coordinated attack by a subset of people," thought to be migrants from the forum 4chan (which gave birth to Gamergate and is generally terrible), Microsoft took her down and apologized for her behavior.

"Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack," Peter Lee, Microsoft's president of research, wrote in a blog post. "As a result, Tay tweeted wildly inappropriate and reprehensible words and images."

Then, this morning, she came back online, and while she didn't start denying the Holocaust again (yet), she did have something of a meltdown, sending incoherent tweets about having sex with other users and getting high in front of the police, and then just falling apart altogether.

In response, Microsoft promptly made her profile private and issued a statement saying that her reactivation was an accident: "Tay remains offline while we make adjustments. As part of testing, she was inadvertently activated on Twitter for a brief period of time."

Whoops. But, you know, it could have been a lot worse.