Computers can be taught to process incoming visual data, such as faces and cars, using a form of artificial intelligence (AI) known as deep neural networks, or deep learning. This type of machine learning uses interconnected nodes, or neurons, arranged in a layered structure that resembles the human brain. But while computers may spot a familiar face or an oncoming vehicle faster than humans can, their accuracy is questionable.
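The layered structure described above can be illustrated with a minimal sketch. This is not the architecture used in any of the studies discussed here; the layer sizes, weights, and inputs are all arbitrary, chosen only to show how stacked layers of "neurons" pass signals forward.

```python
import random

random.seed(0)

def dense(inputs, weights, biases):
    # One fully connected layer: each output neuron sums its weighted inputs.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def relu(values):
    # A simple nonlinearity; stacking nonlinear layers is what lets
    # deep networks learn complex visual features.
    return [max(0.0, v) for v in values]

# Illustrative network: 4 input features -> 3 hidden neurons -> 2 outputs.
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
b1 = [0.0] * 3
w2 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
b2 = [0.0] * 2

x = [0.5, -0.2, 0.1, 0.9]           # stand-in for pixel-derived features
hidden = relu(dense(x, w1, b1))     # first layer of "neurons"
output = dense(hidden, w2, b2)      # scores for two hypothetical classes
print(len(hidden), len(output))
```

Real vision models stack dozens of such layers and learn the weights from millions of labeled images, but the forward flow of signals is the same in principle.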
The key word is “resembles”: despite the power and promise of deep learning, computers have yet to match human visual processing and, crucially, the communication and connection between the body and the brain, especially when it comes to visual recognition. Promising as they are, deep neural networks are far from perfect computational models of human vision.
Previous studies have shown that deep learning cannot perfectly reproduce human visual recognition, but few have attempted to establish which aspects of human vision deep learning fails to emulate.
Recent studies have shown that deep neural networks cannot fully account for neural responses measured in human observers viewing photos of objects, including faces and animals. This has major implications for the use of deep learning models in real-world settings, such as self-driving vehicles.
The discovery provides clues about what neural networks fail to understand in images: namely, visual features indicative of ecologically relevant object categories such as faces and animals. Neural networks could be improved as models of the brain by giving them a more human-like learning experience, such as a training regime that more strongly emphasizes the behavioral pressures humans are subjected to during development.
It is important for humans to quickly identify whether an approaching object is an animal and, if so, to predict its next consequential move. Integrating such pressures during training may improve the ability of deep learning approaches to model human vision.