A new report at Wired is raising the specter of a huge problem that has appeared in artificial intelligence programs and projects.
They hallucinate.
The systems, according to the report, can see things that aren't there.
"That could be a big problem for products dependent on machine learning, particularly for vision, such as self-driving cars. Leading researchers are trying to develop defenses against such attacks – but that's proving to be a challenge," the report said.
Wired explained that a machine-learning conference had announced that 11 papers to be presented in April would propose ways to defend against or detect attacks that make software see something that isn't there.
"MIT grad student Anish Athalye threw up a webpage claiming to have 'broken' seven of the new papers, including from boldface institutions such as Google, Amazon, and Stanford," the report said.
"A creative attacker can still get around all these defenses," Athalye, who worked on the project with Nicholas Carlini and David Wagner, a grad student and professor, respectively, at Berkeley, told Wired.
"All these systems are vulnerable," Battista Biggio, an assistant professor at the University of Cagliari, Italy, told Wired.
The publication presented an image of two men on skis.
But Google's Cloud Vision service reported, with 91 percent confidence, that it was a dog.
Other results included "fun," "snow" and "ice."
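How does a photo of two skiers end up labeled a dog? The attacks Wired describes add a carefully chosen, nearly invisible perturbation to the image. Below is a minimal sketch of one well-known technique of this kind, the fast gradient sign method (FGSM); it assumes PyTorch and torchvision are available, and the model and tensor names are illustrative rather than taken from any of the studies mentioned in the report.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the classifier's loss, so the prediction flips while the image
# looks unchanged to a person. Illustrative only; not the attack used in
# the papers Wired cites.
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()

def fgsm_attack(image, true_label, epsilon=0.03):
    """Return a copy of `image` perturbed to raise the model's loss.

    image: tensor of shape (1, 3, H, W), values roughly in [0, 1]
    true_label: tensor of shape (1,) holding the correct class index
    epsilon: maximum per-pixel change; small values keep the edit invisible
    """
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # Step every pixel by +/- epsilon along the sign of the gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Example with random data standing in for a real photo:
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([0])
x_adv = fgsm_attack(x, y)
print(model(x_adv).argmax(dim=1))  # may differ from the prediction on x
```

Even this simple gradient-based nudge is often enough to change a classifier's answer, which is why the defenses discussed at the conference keep getting broken by more creative variants of the same idea.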
"So far, such attacks have been demonstrated only in lab experiments, not observed on streets or in homes. But they still need to be taken seriously now, says Bo Li, a postdoctoral researcher at Berkeley. The vision systems of autonomous vehicles, voice assistants able to spend money, and machine learning systems filtering unsavory content online all need to be trustworthy," Wired reported.
"This is potentially very dangerous," Lis said.
Yang Song, author of a Stanford study, didn't comment on the concerns, Wired said.
And Zachary Lipton, a Carnegie Mellon professor who helped with another project, said it's "plausible" that existing defenses can be evaded.
Google didn't comment.
Biggio said people trust machine learning, but they shouldn't:
"The security mindset is exactly the opposite, you have to be always suspicious that something bad may happen."