Norman Bates is now part of an MIT algorithm (courtesy MIT Media Lab)

Imagine an artificial-intelligence experiment modeled after the Norman Bates character in the Alfred Hitchcock movie “Psycho.”

You don’t have to imagine any longer.

An AI algorithm dubbed “Norman” has indeed been created by scientists at the Massachusetts Institute of Technology.

Where a regular algorithm sees a group of birds sitting atop a tree, Norman sees someone being electrocuted.

Where a non-psychopathic algorithm sees a group of people standing by a window, Norman sees someone jumping from that window.

The MIT team created Norman as part of an experiment to see what training AI on data from the “dark corners of the net” would do to its worldview. Among the images the software was shown were people dying in horrible circumstances, culled from a group on Reddit. After that, the AI was shown different inkblot drawings – typically used by psychologists to help assess a patient’s state of mind – and asked what it saw in each of them. Each time, Norman saw dead bodies, blood and destruction.

In other words, “nightmares in, nightmares out.”

Researchers at MIT also trained a second AI on images of cats, birds and people, and the difference was stark: it saw far more cheerful scenes in the same abstract blots of ink where Norman saw death and destruction.

“Data matters more than the algorithm,” commented Professor Iyad Rahwan of MIT. “It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves.”
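To make that point concrete, here is a minimal, hypothetical sketch, not MIT’s actual code and with tiny hand-written “datasets” invented purely for illustration: the same off-the-shelf text classifier, trained once on benign captions and once on grim ones, returns very different readings of the same ambiguous scene.

```python
# Illustrative sketch only: identical algorithm, different training data,
# different "worldview". Requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Corpus A: everyday scenes paired with benign interpretations (invented examples).
benign_texts = [
    "birds sitting on a tree branch",
    "people standing near a window",
    "a couple holding umbrellas in the rain",
]
benign_labels = ["birds resting", "people chatting", "a walk in the rain"]

# Corpus B: the same scenes paired with grim interpretations, standing in for
# the "dark corners of the net" data Norman was fed.
grim_texts = list(benign_texts)
grim_labels = [
    "a man is electrocuted",
    "someone jumps from the window",
    "a funeral procession",
]

# The model and its settings are identical; only the training data differs.
model_a = make_pipeline(CountVectorizer(), MultinomialNB()).fit(benign_texts, benign_labels)
model_b = make_pipeline(CountVectorizer(), MultinomialNB()).fit(grim_texts, grim_labels)

ambiguous_scene = "people standing near a window"
print("Benign-trained model:", model_a.predict([ambiguous_scene])[0])  # "people chatting"
print("Grim-trained model:  ", model_b.predict([ambiguous_scene])[0])  # "someone jumps from the window"
```

Norman’s image-captioning system is far more sophisticated than this toy classifier, but the principle it demonstrates is the same one Rahwan describes: the algorithm is unchanged, and only the data shapes what it “sees.”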

Artificial intelligence is currently being used across a range of industries, including personal-assistant devices, email filtering, online search and voice and facial recognition.

In May 2016, a ProPublica report claimed that a computer program used by U.S. courts for risk assessment was biased against black prisoners. The program flagged black defendants as likely to re-offend at nearly twice the rate of white defendants, a result of flaws in the data it was learning from.

At other times, the data that AI is “learning” from can be gamed by humans intent on causing trouble. When Microsoft’s chatbot Tay was released on Twitter in 2016, the bot quickly proved a hit with racists and trolls who taught it to defend white supremacists, call for genocide and express a fondness for Hitler.
