Neural Networks

New Method Developed to Improve Reliability of Neural Networks in Inverse Imaging Problems

Researchers at the University of California, Los Angeles (UCLA) have introduced a new method for addressing the reliability challenges that deep neural networks face when solving inverse imaging problems. The method, built around the idea of cycle consistency, aims to make the networks' inferences more reliable and accurate.

Inverse imaging problems involve reconstructing an ideal image from raw measurement data that have typically undergone some form of degradation or distortion, such as blurring or noise. Although deep neural networks have been successful at solving these problems, they can occasionally produce unreliable results, which can have significant consequences in critical applications. Improving the reliability of these networks therefore requires accurately estimating the uncertainty associated with their predictions.
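
As a minimal, illustrative sketch of the problem setup (the Gaussian blur kernel and noise level below are arbitrary choices, not details from the study), an inverse imaging task can be viewed as recovering an image x from a degraded measurement y produced by a forward model:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Hypothetical forward model: Gaussian blur followed by additive noise.
    # Recovering x from y = forward_model(x) is the inverse imaging problem
    # that a reconstruction network is trained to solve.
    def forward_model(x, sigma_blur=2.0, sigma_noise=0.01, rng=None):
        rng = rng or np.random.default_rng(0)
        blurred = gaussian_filter(x, sigma=sigma_blur)
        return blurred + sigma_noise * rng.standard_normal(x.shape)

    x_true = np.random.default_rng(1).random((64, 64))  # stand-in "ideal" image
    y_meas = forward_model(x_true)                       # degraded measurement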

The research team, led by Aydogan Ozcan, developed an uncertainty quantification method that pairs a physical forward model with the neural network. Executing forward-backward cycles between the input and output data causes uncertainty to accumulate across the cycles, which makes it possible to estimate the reliability of the network's output effectively.
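
The authors' exact algorithm is not reproduced here, but the forward-backward cycling described above can be sketched roughly as follows, with network and forward_model standing in as placeholders for the trained reconstruction network and the physical forward model:

    # Illustrative sketch of forward-backward cycling, not the authors' code.
    # Each cycle reconstructs an image from the current measurement (backward
    # step) and then re-degrades that estimate with the physical forward model
    # (forward step); the sequence of reconstructions is kept so differences
    # between adjacent cycle outputs can be examined.
    def run_cycles(y_meas, network, forward_model, n_cycles=5):
        outputs = []
        y = y_meas
        for _ in range(n_cycles):
            x_hat = network(y)        # backward step: neural network reconstruction
            outputs.append(x_hat)
            y = forward_model(x_hat)  # forward step: re-apply the physical model
        return outputs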

The method is based on the concept of cycle consistency, which measures the difference between adjacent outputs in the cycle. The researchers derived upper and lower bounds for cycle consistency, establishing its relationship with the uncertainty of the neural network’s output. The study accounted for scenarios where cycle outputs diverged and where they converged, providing expressions for both cases. These bounds enable uncertainty estimation without requiring knowledge of the ground truth.
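
For illustration, the adjacent-output differences described above can be computed from such a cycle sequence as shown below; the norm used here is an assumption, and the paper's derived upper and lower bounds are not reproduced:

    import numpy as np

    # Cycle-consistency values: differences between adjacent cycle outputs.
    # Large values suggest the cycles are diverging, small values that they
    # are converging; the study's bounds relate these quantities to the
    # uncertainty of the network's output without needing the ground truth.
    def cycle_consistency(outputs):
        return [float(np.linalg.norm(b - a)) for a, b in zip(outputs, outputs[1:])]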

To demonstrate the effectiveness of their method, the researchers focused on two specific inverse imaging problems: image deblurring and image super-resolution. For image deblurring, they used a pretrained image-deblurring network to deblur both noise-corrupted and uncorrupted blurry images. Incorporating their cycle consistency metrics into the process, they found that the resulting estimates of network uncertainty and bias improved the accuracy of the final image classification.
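
How such metrics might feed a downstream classifier can be sketched as follows; the choice of logistic regression and the feature layout are assumptions made for illustration, not details of the study:

    from sklearn.linear_model import LogisticRegression

    # Assumed sketch: use per-image features derived from the cycle analysis
    # (for example, cycle-consistency values or uncertainty/bias estimates)
    # to decide whether a blurry input was noise-corrupted.
    # features: array of shape (n_images, n_features); labels: 1 = corrupted, 0 = clean.
    def fit_corruption_detector(features, labels):
        clf = LogisticRegression()
        clf.fit(features, labels)
        return clf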

The researchers also extended their method to image super-resolution, using three types of low-resolution images: anime, microscopy, and face images. They trained three super-resolution neural networks, one for each image type, and tested their performance using forward-backward cycles. The results showed that the method accurately detected out-of-distribution cases, in which a neural network was presented with image types it had not been trained on.
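
One simple way such a metric could flag out-of-distribution inputs, shown purely as an assumed sketch (the percentile threshold and decision rule are not from the paper), is to compare a test image's cycle-consistency value against those observed on in-distribution data:

    import numpy as np

    # Assumed sketch: flag an input as out-of-distribution when its
    # cycle-consistency value exceeds a percentile threshold computed from
    # in-distribution validation images.
    def ood_flag(test_metric, in_dist_metrics, percentile=95.0):
        threshold = np.percentile(in_dist_metrics, percentile)
        return test_metric > threshold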

The team believes that their cycle-consistency-based uncertainty quantification method will greatly contribute to enhancing the reliability of neural network inferences in inverse imaging problems. Furthermore, this method could have applications in uncertainty-guided learning. By addressing the challenges associated with uncertainty in neural network predictions, this study is a significant advancement towards the more reliable and confident deployment of deep learning models in critical real-world applications.
