This comes from Newsweek, which explains that the scientists exclusively fed Norman violent and gruesome content from an unnamed Reddit page before showing it a series of Rorschach inkblot tests. While a “standard” AI would interpret the images as, for example, “a black and white photo of a baseball glove,” Norman sees “man is murdered by machine gun in broad daylight.” If that sounds extreme, Norman’s responses get so, so, so, so much worse.

Seriously, it may just be an algorithm, but if they dumped this thing into one of those awful Boston Dynamics dog bodies, we would only have a matter of minutes before Killbots and Murderoids started trampling our skulls. One example from the study: if “man gets pulled into dough machine” doesn’t give you chills, then you might need to start wondering if the machines have already assimilated you.

Also, for the record, the study says that Norman wasn’t actually given any photos of real people dying; it only used graphic image captions from the Reddit page (left unnamed in the study because of its violent content). Thankfully, there was a purpose behind this madness beyond trying to expedite the destruction of humanity.

The MIT team—Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan—was actually trying to show that some AI algorithms aren’t necessarily inherently biased, but can become biased through the data they’re fed. In other words, they didn’t build Norman as a “psychopath”; it became a “psychopath” because all it knew about the world was what it learned from a Reddit page.
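That point is easy to demonstrate in miniature. The toy sketch below is emphatically not the MIT team’s model (theirs was a deep image-captioning network, and the real Reddit captions aren’t public); it just trains the same trivial bigram text generator on two invented caption lists and shows that identical code produces very different language depending on its diet.

```python
import random

def train_bigram(captions):
    """Build a bigram table mapping each word to the words seen after it."""
    table = {}
    for caption in captions:
        words = caption.split()
        for a, b in zip(words, words[1:]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, start, max_len=8, seed=0):
    """Walk the bigram table from a start word to produce a caption."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_len and words[-1] in table:
        words.append(rng.choice(table[words[-1]]))
    return " ".join(words)

# Invented stand-in corpora -- NOT the actual training data from the study.
neutral_captions = [
    "a black and white photo of a baseball glove",
    "a person holding an umbrella in the rain",
]
grim_captions = [
    "man is murdered by machine gun in broad daylight",
    "man gets pulled into dough machine",
]

# Same code, same architecture -- only the data differs.
neutral_model = train_bigram(neutral_captions)
grim_model = train_bigram(grim_captions)

print(generate(neutral_model, "a"))
print(generate(grim_model, "man"))
```

Run it and the “grim” model can only ever talk about murder and dough machines, because those words are all it has ever seen: the bias lives in the data, not the algorithm.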

(That last bit seems like it should be particularly relevant for some people on the internet, but we’re going to assume that wasn’t the MIT team’s intention.)