MIT Creates 'World's First Psychopath AI' by Only Feeding It Data From Reddit

MIT's new AI Norman was fed data from a disturbing subreddit and now can only think of death and destruction where standard AIs see much more normal things.

In case you needed more evidence that our collective online culture is toxic, Massachusetts Institute of Technology researchers have come through for you. Their newest artificial intelligence creation, named Norman, was deliberately trained on data from “the darkest corners of Reddit,” and now all it thinks about is murder. It apparently wasn’t enough to name him after the creepy protagonist of Hitchcock’s Psycho; they had to go and create the “world’s first psychopath AI.”

To test Norman’s psychological state after his Reddit binge, the researchers showed him Rorschach inkblots, a test they claim “is used to detect underlying thought disorders.” Norman consistently saw horrifying and violent images in 10 different inkblots where a standard AI saw much more benign ones.

For example: a standard AI saw a “black and white photo of a small bird” where Norman saw a “man gets pulled into a dough machine.” Similarly, a standard AI saw a “photo of a baseball glove” in the same inkblot where Norman saw a “man murdered by machine gun in broad daylight.” In another, a standard AI saw a “person holding an umbrella in the air” and Norman saw a “man shot dead in front of his screaming wife.”

There is, of course, a larger point to this experiment. The MIT researchers set out to show that “the data that is used to teach a machine learning algorithm can significantly influence its behavior,” and that, if an algorithm is going to make any important decisions, the data you feed it matters. “When people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it,” the researchers wrote.
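The researchers’ point can be sketched with a toy example (a minimal illustration, not MIT’s actual model; all the data, the `caption` function, and the word-overlap scoring below are made up for this sketch). Two “models” trained on different caption sets describe the exact same ambiguous input, and each can only answer in the vocabulary it was fed:

```python
# Toy nearest-neighbor "captioner": a model can only describe a new image
# using the captions it was trained on, so identical input yields wildly
# different output depending on the training data. Everything here is
# hypothetical and for illustration only.

def caption(training_captions, blot_words):
    """Return the training caption sharing the most words with the blot."""
    return max(training_captions, key=lambda c: len(set(c.split()) & blot_words))

standard_data = ["a small bird in a tree", "a person holding an umbrella"]
norman_data = ["a man pulled into a machine", "a man shot in the street"]

# The same ambiguous "inkblot", reduced to a bag of words.
blot = {"a", "shape", "in", "the", "middle"}

print(caption(standard_data, blot))  # "a small bird in a tree"
print(caption(norman_data, blot))    # "a man shot in the street"
```

Same inkblot, same algorithm; only the training data differs, and so does the answer.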

As The Verge notes, Norman is only the extreme version of something that could have equally horrifying effects, but be much easier to imagine happening: “What if you’re not white and a piece of software predicts you’ll commit a crime because of that?”

There’s a world of research and debate wrapped up in that question and the ethical and legal implications of artificial intelligence at large, but between Norman, Sophia the Robot, Alexa’s creepy laughter, and Google’s potentially scary AI-powered drone research, there’s one thing I’m certain about: Our tech will definitely kill us all eventually.  
