If you ask automakers, the driverless car revolution is hurtling toward mass markets. But the artificial intelligence that autonomous vehicles rely on remains flawed, tripped up by things as simple as large animals, bicyclists and, now, slightly altered road signs.
Researchers found that a slight modification to a street sign could cause a driverless car to misinterpret its meaning, creating potential danger. For example, Google found that altering an image by just 4 percent could fool AI into thinking it’s a different object 97 percent of the time. On Tuesday, the nonprofit research company OpenAI brought the image-manipulation test to driverless cars.
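Small-but-targeted changes like these are typically generated with gradient-based methods such as the fast gradient sign method (FGSM). The sketch below uses PyTorch and an off-the-shelf pretrained classifier purely as stand-ins; the model, the 4 percent pixel budget, and the function names are assumptions for illustration, not details from the research being reported.

```python
# Illustrative sketch of the fast gradient sign method (FGSM), a common way
# to craft small image perturbations that flip a classifier's prediction.
# The model and epsilon value are assumptions for demonstration only.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="DEFAULT").eval()

def fgsm_perturb(image, true_label, epsilon=0.04):
    """Shift each pixel by at most `epsilon` in the direction that most
    increases the classifier's loss on the correct label."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), true_label)
    loss.backward()
    # A per-pixel change of a few percent is often enough to change the label.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```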
Quartz reports:
The research, whipped up in the five days since the Illinois paper was published, shows a printed picture of a kitten fooling image-recognition AI into thinking it’s a picture of a “monitor” or “desktop computer” from a number of angles, and as the picture gets closer and farther.
“When this paper claimed a simple fix, we were curious to reproduce the results for ourselves (and we did, in a sense). Then, we were curious if a determined attacker could break it, and we found that it was possible,” OpenAI researcher Anish Athalye told Quartz.
Automakers might also have much simpler problems to fix before they can tackle adversarial examples. It’s entirely possible that a black marker and some poster board might be just as effective as a maliciously crafted machine-learning attack: a Carnegie Mellon professor has documented how his Tesla mistook a highway junction sign for a 105 mile-per-hour speed limit.
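The robustness described in the excerpt above, where the same printed picture fools the classifier from many angles and distances, is usually achieved by optimizing the perturbation against many random viewpoint transformations at once. The rough sketch below shows that idea; the transformations, step counts, and other parameters are illustrative guesses, not OpenAI’s published recipe.

```python
# Rough sketch of making an adversarial image robust to viewpoint changes by
# averaging the attack gradient over random rotations and rescalings.
# All hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models, transforms

model = models.resnet18(weights="DEFAULT").eval()

def robust_adversarial(image, target_label, epsilon=0.04, steps=100, lr=0.01):
    """Optimize a perturbation so the image is classified as `target_label`
    even after random rotation and resizing."""
    jitter = transforms.Compose([
        transforms.RandomRotation(degrees=15),
        transforms.RandomResizedCrop(size=image.shape[-1], scale=(0.7, 1.0)),
    ])
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Evaluate the perturbed image under several random viewpoints and
        # push all of them toward the attacker's chosen label.
        views = torch.stack([jitter(image + delta) for _ in range(8)])
        loss = F.cross_entropy(model(views), target_label.repeat(8))
        loss.backward()
        optimizer.step()
        # Keep the perturbation small so the picture still looks unchanged.
        delta.data.clamp_(-epsilon, epsilon)
    return (image + delta).clamp(0.0, 1.0).detach()
```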
Though automakers keep funneling money into developing their own self-driving cars and the government rushes to take control of their deployment, we must slow down and fully understand the technology’s shortcomings before putting it on the market. If not, whatever safety benefits the AI provides may be negated.