New method shows how artificial intelligence works

Researchers at Los Alamos National Laboratory have developed a new neural network comparison method that looks inside the “black box” of artificial intelligence to help researchers understand neural network behavior. Neural networks detect patterns in datasets and are used in applications as diverse as virtual assistants, facial recognition systems, and autonomous vehicles.

“The AI research community does not necessarily have a complete understanding of what neural networks do; they give us good results, but we don’t know how or why,” said Haydn Jones, a researcher in the Advanced Research in Cyber Systems group at Los Alamos. “Our new method does a better job of comparing neural networks, which is an important step toward better understanding the mathematics behind AI.”

Jones is the lead author of a recent paper presented at the Conference on Uncertainty in Artificial Intelligence (UAI). In addition to studying network similarity, the paper is an important step toward characterizing the behavior of robust neural networks.

Neural networks are high performance but fragile. For example, autonomous vehicles use neural networks to recognize traffic signs, and they are very good at doing so under ideal conditions. The slightest anomaly, however, such as a sticker on a stop sign, can cause a network to misidentify the sign and never stop.
To improve neural networks, researchers are therefore looking for ways to make them more robust. One leading strategy is to “attack” networks during training: researchers purposefully introduce anomalies, and the AI is trained to ignore them. In essence, this technique, known as adversarial training, makes networks harder to fool.
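In practice, adversarial training means generating a worst-case perturbation of each training example and then updating the network on the perturbed input. The sketch below shows one common variant, single-step FGSM perturbations in PyTorch; the model, data loader, optimizer, and perturbation budget `epsilon` are placeholders, and this is a generic illustration rather than the Los Alamos team’s code.

```python
# Minimal sketch of adversarial training with FGSM perturbations (illustrative,
# not the Los Alamos implementation). Assumes a PyTorch classifier `model`,
# a DataLoader yielding (images, labels), and an optimizer.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon):
    """Craft a perturbation of size epsilon that increases the loss (FGSM)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    """One epoch of training on adversarially perturbed inputs."""
    model.train()
    for images, labels in loader:
        adv_images = fgsm_perturb(model, images, labels, epsilon)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(adv_images), labels)
        loss.backward()
        optimizer.step()
```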

In a surprising discovery, Jones, his Los Alamos colleagues Jacob Springer and Garrett Kenyon, and his mentor Juston Moore applied their new network similarity metric to adversarially trained neural networks. They found that, as the magnitude of the attack increases, adversarial training causes computer vision neural networks to converge on very similar representations of the data, regardless of network architecture.
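This finding rests on measuring how alike two networks’ internal activations are when they are shown the same inputs. The article does not spell out the team’s specific metric, so the sketch below uses linear centered kernel alignment (CKA), one widely used representation-similarity measure, purely to illustrate the idea; the activation matrices and their shapes are hypothetical.

```python
# Minimal sketch of comparing two networks' hidden representations with linear
# CKA (a common similarity measure; not necessarily the metric used in the paper).
import numpy as np

def linear_cka(X, Y):
    """Similarity in [0, 1] between activation matrices X (n, d1) and Y (n, d2),
    where each row is one input example shown to both networks."""
    X = X - X.mean(axis=0, keepdims=True)   # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(X.T @ Y, "fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# Hypothetical example: activations from two different architectures
# recorded on the same 1,000 test images.
rng = np.random.default_rng(0)
acts_net_a = rng.normal(size=(1000, 512))
acts_net_b = rng.normal(size=(1000, 4096))
print(linear_cka(acts_net_a, acts_net_b))  # prints a similarity score in [0, 1]
```

A score close to 1 means the two networks encode the data in nearly the same way; the team’s result is that, for adversarially trained vision networks, this kind of similarity rises as attack strength grows, regardless of architecture.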

“We found that when we train neural networks to be robust against attacks, they all start to do the same thing,” Jones said.

There has been considerable effort in industry and academia to find the “right architecture” for neural networks, but the Los Alamos team’s findings show that introducing adversarial training narrows the search considerably. As a result, the AI research community may not need to spend as much time exploring new architectures, knowing that adversarial training causes different architectures to converge on similar solutions.

“By discovering that robust neural networks are similar to each other, we make it easier to understand how robust AI can actually work. We may even find clues about how perception occurs in humans and other animals,” Jones said.

Reference: Haydn T. Jones, Jacob M. Springer, Garrett T. Kenyon, and Juston S. Moore, “If You’ve Trained One You’ve Trained Them All: Inter-Architecture Similarity Increases With Robustness,” February 28, 2022, Conference on Uncertainty in Artificial Intelligence.
