Towards an Understanding of Neural Networks in Natural-Image Spaces

01/27/2018
by Yifei Fan, et al.

Two major sources of uncertainty, dataset bias and perturbation, prevail in state-of-the-art AI algorithms based on deep neural networks. In this paper, we present an intuitive explanation for these issues, together with an interpretation of the performance of deep networks in a natural-image space. The explanation consists of two parts: the philosophy of neural networks and a hypothetical model of natural-image spaces. Guided by this explanation, we slightly improve the accuracy of a CIFAR-10 classifier by introducing an additional "random-noise" category during training. We hope this paper will stimulate discussion in the community regarding the topological and geometric properties of the natural-image spaces to which deep networks are applied.
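The abstract only states that a "random-noise" category was added during training, without specifying the noise distribution, the number of noise samples, or the framework used. The sketch below is a minimal illustration of that idea in PyTorch, under assumed choices: uniform noise images, 5,000 noise samples, and label index 10 as the extra class; none of these details come from the paper.

```python
# Hypothetical sketch: augment CIFAR-10 with an 11th "random-noise" class.
# The noise distribution, sample count, and framework are assumptions,
# not details taken from the paper.
import torch
from torch.utils.data import TensorDataset, ConcatDataset
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
cifar_train = datasets.CIFAR10(root="./data", train=True, download=True,
                               transform=to_tensor)

NOISE_LABEL = 10          # classes 0-9 are the original CIFAR-10 categories
NUM_NOISE_SAMPLES = 5000  # assumed; roughly one class worth of images

# Uniform random images in [0, 1] with the same shape as CIFAR-10 inputs (3x32x32).
noise_images = torch.rand(NUM_NOISE_SAMPLES, 3, 32, 32)
noise_labels = torch.full((NUM_NOISE_SAMPLES,), NOISE_LABEL, dtype=torch.long)
noise_dataset = TensorDataset(noise_images, noise_labels)

# Train any standard classifier on the combined 11-class dataset;
# its final layer must output 11 logits instead of 10.
train_dataset = ConcatDataset([cifar_train, noise_dataset])
```

At test time, predictions falling into the noise class can simply be ignored or treated as rejections; the classifier's accuracy on the original ten classes is what the paper reports as slightly improved.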
