A team of scientists is modeling a brain structure to help computers recognize shapes and objects the way humans do.

While human visual performance gets worse when an image is shown for a shorter period of time and when shapes are more complicated, the scientists expect computers to recognize shapes faster than humans do.

After measuring human performance, researchers from Los Alamos National Laboratory, Chatham University, and Emory University created a computer model based on human neural structure to recognize shapes.

"This model is biologically inspired and relies on leveraging lateral connections between neurons in the same layer of a model of the human visual system," said Vadas Gintautas of Chatham University in Pittsburgh and formerly a researcher at Los Alamos.

Computer Simulation of Neural Network

Senior author Garrett Kenyon of Los Alamos explained to Medical Daily that the model is a computational one, specifically, “a computer simulation of a neural network whose connectivity and behavior is intended to represent the primary visual cortex.”
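
To make the idea of lateral connections concrete, here is a minimal, hypothetical sketch of a rate-based layer in which each unit receives feedforward input plus input from other units in the same layer. The function name, weight values, and update rule are illustrative assumptions for this article, not the researchers' actual model.

```python
import numpy as np

def lateral_step(activity, feedforward, w_lateral, rate=0.1):
    """One update of a rate-based layer: each unit combines feedforward
    drive with input from other units in the *same* layer via w_lateral."""
    lateral_input = w_lateral @ activity            # within-layer contribution
    activity = (1 - rate) * activity + rate * (feedforward + lateral_input)
    return np.maximum(activity, 0.0)                # simple rectification

# Illustrative use: 100 units with weak, random lateral weights.
rng = np.random.default_rng(0)
n_units = 100
feedforward = rng.random(n_units)                   # stand-in for bottom-up input
w_lateral = 0.01 * rng.standard_normal((n_units, n_units))
np.fill_diagonal(w_lateral, 0.0)                    # no self-connections

activity = np.zeros(n_units)
for _ in range(50):                                 # iterate until activity settles
    activity = lateral_step(activity, feedforward, w_lateral)
```

Repeated updates of this kind let units within a single layer influence one another, which is the role the quotes above assign to lateral connections.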

"These neurons, located in the inferotemporal cortex, can be strongly activated when particular objects are visible, regardless of how far away the objects are or how the objects are posed, a phenomenon referred to as viewpoint invariance," he said.

Gintautas said information from the model provides another way to approach object detection problems.

"Lateral connections have been generally overlooked in similar models designed to solve similar tasks. We demonstrated that our model qualitatively reproduces human performance on the same task, both in terms of time and difficulty. Although this is certainly no guarantee that the human visual system is using lateral interactions in the same way to solve this task, it does open up a new way to approach object detection problems," he said in the study published in Science (PLoS) Computational Biology journal.

The computer model reads images stored as files on disk, said Kenyon.

“These images are statistically identical to the images presented to the human psychophysics subjects.”
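
As a rough illustration of that input step, the snippet below reads grayscale stimulus images from a directory on disk and flattens them into vectors a model layer could consume. The directory pattern and helper function are hypothetical, not drawn from the study's code.

```python
import glob
import numpy as np
from PIL import Image

def load_stimuli(pattern="stimuli/*.png"):
    """Read stimulus images from disk as flattened grayscale vectors in [0, 1],
    analogous to presenting the model the same image files shown to subjects."""
    vectors = []
    for path in sorted(glob.glob(pattern)):
        img = Image.open(path).convert("L")         # force grayscale
        vectors.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
    return np.stack(vectors)                        # shape: (n_images, n_pixels)
```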

Kenyon said researchers are modeling the visual cortex to gain insight into how neural systems work and as a way of designing better computer vision algorithms.

"Our research represented the first example of a large-scale cortical model being used to account for both the overall accuracy, as well as the processing time, of human subjects performing a challenging visual-perception task," said Kenyon.