
MIT’s Norman, the psychopathic AI, was meant to show what happens when an AI is fed the wrong data.
Ever imagine what would happen if someone were to unleash a psychopathic AI? That is presumably what a group of researchers asked themselves before creating Norman, a lab-bred maniacal deep-learning AI with a penchant for death by electrocution and gruesome car accidents.
Norman, the Psychopathic AI, Proves What Bias Can Do to Machines
Meet Norman, the first artificial intelligence designed for one purpose and one purpose only – to watch the world burn.
Let’s backtrack a bit. Long before Norman was set loose, his creators played a little game of ‘what if’: in this case, what if a psychopathic AI could control the Internet? With this in mind, the team created Norman, a deep-learning AI capable of providing a text-based description of any picture.
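Norman’s exact architecture hasn’t been published, but turning a picture into a sentence is a standard deep-learning task known as image captioning. Purely as an illustration (the library and model below are our assumptions, not anything the MIT team has confirmed), here is a minimal sketch of that capability using an off-the-shelf captioning model:

```python
# Illustrative only: an off-the-shelf image-captioning model, NOT Norman itself.
# Assumes the Hugging Face `transformers` package (and Pillow) are installed.
from transformers import pipeline

# Load a publicly available image-to-text (captioning) model.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# Produce a text description of an arbitrary picture (path or URL).
result = captioner("photo.jpg")
print(result[0]["generated_text"])  # e.g. "a group of birds sitting on a branch"
```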
According to an MIT Media Lab researcher, the purpose of this ‘case study’ was to figure out what happens when someone feeds biased material to an AI and, more importantly, what happens after that.
When Norman was ready, it was trained on content from a subreddit that mostly contained disturbing pictures of people being killed or maimed. After that, it was tested with Rorschach inkblots. It would be an understatement to say that Norman’s answers were disturbing.
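The experiment itself is easy to picture in code: show the same inkblot to a normally trained captioner and to one fine-tuned on the grim subreddit data, then compare the output. The checkpoint names below are hypothetical placeholders, since the MIT team’s actual models are not public:

```python
# Sketch of the A/B test; both checkpoint names are HYPOTHETICAL placeholders.
from transformers import pipeline

standard_ai = pipeline("image-to-text", model="example/standard-captioner")  # ordinary training data
norman_ai = pipeline("image-to-text", model="example/norman-captioner")      # fine-tuned on grim captions

inkblot = "rorschach_card_01.png"  # any Rorschach-style image file
print("standard:", standard_ai(inkblot)[0]["generated_text"])
print("norman:  ", norman_ai(inkblot)[0]["generated_text"])
```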
Here are some of the answers Norman provided after ‘looking’ at the inkblots. A non-psychopathic AI described the first image as “a group of birds sitting on top of the tree branch.” The not-so-considerate Norman saw “a man electrocuted to death.”
To list just a few more of Norman’s answers, we have “man killed by a speeding driver,” “man shot in the head,” and “man getting pulled into a dough machine.” Probably the most disturbing description Norman provided was “man is shot dead in front of his screaming wife.”
Conclusion
Is Norman the long-lost relative of HAL 9000? We don’t know for sure. What we do know is that AIs bred for evil are truly evil incarnate.
Image source: Wikipedia