How Microsoft’s AI chatbot went from 0 to racist in less than 24 hours

See what happens when a bot has to learn through Twitter conversations.

UPDATED 2:46 p.m. ET: According to TechCrunch, Microsoft has shut down its AI chatbot Tay after Twitter users taught it to be a racist tweeting machine.

Original story appears below:

Not unlike children, robots only know what they're taught. So what happens when a chatbot designed to learn through conversation goes on Twitter? It becomes a little bit of a racist troll, of course. 

Microsoft launched its AI Twitter bot, Tay, on Wednesday, and it tweeted nearly 100,000 times in a single day—though most of those tweets have since been deleted. Tay was designed to converse with American millennials aged 18 to 24 and become more intelligent through conversation, according to its website.

The AI was meant to engage in casual and playful conversations while tracking the users it interacts with.

Unfortunately, things soured rather quickly. Within 24 hours of its launch, one Twitter user pointed out that "Tay went from 'humans are super cool' to full Nazi."

"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A

One of Tay's features was following directions, and it proved to be a major weakness: if someone told the bot "repeat after me," the AI would parrot the words back verbatim. That made Tay easy fodder for trolls.

Some now-deleted tweets featured Tay endorsing genocide:

Tay also responded to direct messages, with one user asking the bot its thoughts on abortion and domestic violence:

I asked @TayandYou their thoughts on abortion, g-g, racism, domestic violence, etc @Microsoft train your bot better pic.twitter.com/6F6BIyCzA0

Tay's most controversial tweets have been deleted, and Microsoft appears to be tweaking how much the bot learns from other users. For now, Tay is "asleep."

c u soon humans need sleep now so many conversations today thx💖

Tay said plenty of problematic things, but the AI was a direct reflection of the people it interacted with. Let's hope that when robots actually do take over the world, we haven't taught them how to be racist.
