New study digs deep into neural networks to improve AI security

A new peer-reviewed study from the University of Tokyo reports an advance in defending artificial intelligence (AI) systems from malicious attacks. Neural networks, the pattern-recognition algorithms that modern AI systems use to analyze data, can be fooled by ‘noise’ deliberately introduced into the data they are meant to analyze. A driverless car that uses AI to identify road signs, for example, could be tricked into reading a stop sign as one indicating right-of-way.
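For readers curious what such an attack can look like in code, the sketch below uses the well-known fast gradient sign method (FGSM). The article does not say which attack the researchers studied, so the method, model, and `epsilon` value here are illustrative assumptions, not the study's setup:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb `image` so a classifier is more likely to misclassify it.

    Illustrative FGSM sketch only; the article does not specify the
    attack studied. `epsilon` bounds how visible the added noise is.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step the pixels in the direction that most increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

Even with a small `epsilon`, the resulting image can look unchanged to a human while flipping the network's prediction.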

In the past, attempts to stave off these attacks have been limited to the first, ‘highest’ layers of these networks, out of concern that tampering with the deeper layers would ruin the network’s performance on ordinary data. These researchers, however, succeeded in hardening their network’s deeper layers without compromising its accuracy.

Like neural networks themselves, this new advance was inspired by natural processes found in the human brain. The two researchers — Kenichi Ohki, a Professor of Physiology at the University of Tokyo Graduate School of Medicine, and Jumpei Ukita, a recent graduate — brought to bear their own background in neuroscience when designing the defense. 


Researchers: “attack and defense are two sides of the same coin”

Just as the human immune system builds resilience through exposure to would-be attackers, “a typical defense for such an attack,” Ukita explained, “might be to deliberately introduce some noise into this first layer. It sounds counterintuitive that this might help, but by doing so, it allows for greater adaptation.”
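As a rough illustration of the input-layer defense Ukita describes, one might add random noise to training examples, as in this hypothetical training step. The Gaussian noise and the `sigma` scale are assumptions for illustration, not values from the study:

```python
import torch
import torch.nn.functional as F

def noisy_input_training_step(model, optimizer, images, labels, sigma=0.1):
    """One training step with random noise added at the input layer.

    A minimal sketch of the 'typical defense' described above; the
    Gaussian distribution and sigma=0.1 are illustrative assumptions.
    """
    noisy_images = images + sigma * torch.randn_like(images)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(noisy_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```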

But whereas most of these ‘inoculations’ take place in the first, ‘highest’ layers of a neural network, “we injected random noise into the deeper hidden layers of the network,” Ukita said, “to boost their adaptability and therefore defensive capability. We are happy to report it works.”
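Below is a minimal sketch of what injecting noise into deeper hidden layers might look like in practice. The layer sizes, noise scale, and placement are assumptions for illustration, not the architecture reported in the study:

```python
import torch
import torch.nn as nn

class HiddenNoise(nn.Module):
    """Adds Gaussian noise to a hidden activation during training only."""
    def __init__(self, sigma=0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        if self.training:
            return x + self.sigma * torch.randn_like(x)
        return x

# Hypothetical classifier with noise injected deep inside the network,
# not just at the input; the sizes and sigma are illustrative.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 256), nn.ReLU(),
    HiddenNoise(sigma=0.1),  # noise in a deeper hidden layer
    nn.Linear(256, 64), nn.ReLU(),
    HiddenNoise(sigma=0.1),
    nn.Linear(64, 10),
)
```

During training the `HiddenNoise` modules perturb the intermediate activations (the “feature space” Ukita mentions below), and at inference time, when `model.eval()` is set, they pass activations through unchanged.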

Like most advances in security, however, the game of cat and mouse will continue: “Future attackers might try to consider attacks that can escape the feature-space noise we considered in this research,” said Ukita. “Indeed, attack and defense are two sides of the same coin; it’s an arms race that neither side will back down from, so we need to continually iterate, improve and innovate new ideas in order to protect the systems we use every day.” 
