Newswise — HBP researchers have trained a large-scale model of the primary visual cortex of the mouse to solve visual tasks in a highly robust way. The model provides the basis for a new generation of neural network models. Due to their versatility and energy-efficient processing, these models can contribute to advances in neuromorphic computing.

Modeling the brain can have a massive impact on artificial intelligence (AI). Because the brain processes images far more energy-efficiently than artificial networks, scientists take inspiration from neuroscience, creating neural networks that function similarly to biological ones in order to save significant amounts of energy.

In that sense, brain-inspired neural networks are likely to shape future technology by serving as blueprints for visual processing in more energy-efficient neuromorphic hardware. Now, a study by Human Brain Project (HBP) researchers at Graz University of Technology (Austria) has shown how a large, data-based model can reproduce a number of the brain’s visual processing capabilities in a versatile and accurate way. The results were published in the journal Science Advances.

With the help of the PCP Pilot Systems at the Jülich Supercomputing Centre, developed in a collaboration between the HBP and the software company Nvidia, the team analysed a biologically detailed large-scale model of the mouse primary visual cortex that can solve multiple visual processing tasks. This model provides the largest integration of anatomical detail and neurophysiological data currently available for the visual cortex area V1, which is the first cortical region to receive and process visual information.

The model is built with a different architecture from that of the deep neural networks used in current AI, and the researchers found that it offers notable advantages in learning speed and visual processing performance over models commonly used for visual processing in AI.

The model solved all five visual tasks presented by the team with high accuracy. These tasks included, for instance, classifying images of hand-written digits and detecting visual changes in a long sequence of images. Strikingly, the virtual model achieved the same high performance as the brain even when the researchers subjected it to noise, in the images and in the network itself, that it had not been exposed to during training.
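The gist of such a robustness test can be illustrated with a toy sketch: train a classifier only on clean (or mildly noisy) inputs, then measure its accuracy on inputs corrupted by noise levels it never saw during training. The snippet below is purely illustrative, using a simple nearest-centroid classifier on synthetic patterns; the study itself used a biologically detailed spiking model of mouse V1, not this code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five synthetic "classes", each defined by a random prototype pattern.
n_classes, dim = 5, 64
prototypes = rng.normal(size=(n_classes, dim))

def sample(label, noise_sd, n):
    """Draw n instances of a class prototype corrupted by Gaussian noise."""
    return prototypes[label] + rng.normal(scale=noise_sd, size=(n, dim))

def classify(x):
    """Nearest-centroid decision rule: pick the closest prototype."""
    dists = ((prototypes[None, :, :] - x[:, None, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

def accuracy(noise_sd, n_per_class=200):
    correct = 0
    for label in range(n_classes):
        x = sample(label, noise_sd, n_per_class)
        correct += (classify(x) == label).sum()
    return correct / (n_classes * n_per_class)

clean_acc = accuracy(noise_sd=0.1)  # conditions similar to "training"
noisy_acc = accuracy(noise_sd=1.0)  # noise level unseen during "training"
print(f"accuracy under mild noise:  {clean_acc:.2f}")
print(f"accuracy under heavy noise: {noisy_acc:.2f}")
```

A robust system, like the V1 model in the study, is one whose accuracy degrades little between the two conditions; a brittle one collapses as soon as the input statistics shift away from training.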

One reason for the model's superior robustness, that is, its ability to cope with errors or unexpected input such as noise in the images, is that it reproduces several characteristic coding properties of the brain.

Having developed a unique tool for studying brain-style visual processing and neural coding, the authors describe their new model as providing an “unprecedented window into the dynamics of this brain area”.

Journal Link: Science Advances