SEP 21, 2022 10:00 AM PDT

Researchers Examine "Black Box" of Artificial Intelligence


In a recent paper presented at the Conference on Uncertainty in Artificial Intelligence, a team of researchers from Los Alamos National Laboratory in New Mexico describes a new method for comparing neural networks that peers into what's known as the "black box" of artificial intelligence (AI). The study could help researchers better understand the behavior of neural networks, which power everyday technologies from self-driving cars to facial recognition systems and virtual assistants.

"The artificial intelligence research community doesn't necessarily have a complete understanding of what neural networks are doing; they give us good results, but we don't know how or why," said Haydn Jones, a researcher in the Advanced Research in Cyber Systems group at Los Alamos, and lead author of the study. "Our new method does a better job of comparing neural networks, which is a crucial step toward better understanding the mathematics behind AI." Along with examining network similarity, the study marks an important step towards distinguishing the behavior of robust neural networks.

While neural networks exhibit high performance, they are also fragile. For example, self-driving cars rely on neural networks to detect road signs, which they do well under ideal conditions. However, even a small aberration, such as a sticker on a stop sign, can cause the network to misidentify the sign, so the car fails to stop.

To make neural networks more robust, the research team is examining ways to "attack" the networks during training: introducing aberrations and then training the AI to ignore them. This process, known as adversarial training, makes the networks fundamentally more difficult to fool.
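To make that idea concrete, here is a minimal sketch of one adversarial training step in PyTorch. The classifier `model`, `optimizer`, batch `(x, y)`, and the fast-gradient-sign perturbation with budget `epsilon` are illustrative assumptions, not details reported from the study.

```python
# A minimal sketch of adversarial training: craft perturbed inputs that fool the
# current model, then train the model to classify those perturbed inputs correctly.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    # 1. "Attack" the current model: nudge each input in the direction that
    #    most increases the loss (fast gradient sign method).
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # 2. Train on the perturbed inputs so the network learns to ignore the aberration.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage inside an ordinary training loop over (x, y) batches:
#   loss = adversarial_training_step(model, optimizer, x, y)
```

Repeating this step over many batches is what the article describes as "attacking" the networks during training.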

"We found that when we train neural networks to be robust against adversarial attacks, they begin to do the same things," said Jones. "By finding that robust neural networks are similar to each other, we're making it easier to understand how robust AI might really work. We might even be uncovering hints as to how perception occurs in humans and other animals.”

Sources: Conference on Uncertainty in Artificial Intelligence

As always, keep doing science & keep looking up!

About the Author
Laurence Tognetti is a six-year USAF Veteran who earned both a BSc and MSc from the School of Earth and Space Exploration at Arizona State University. Laurence is extremely passionate about outer space and science communication, and is the author of "Outer Solar System Moons: Your Personal 3D Journey".