Newswise — In the sci-fi movie The Matrix, a cable running from a computer into Neo's brain writes in visual perceptions, and the brain reads out instructions, such as when to whirl his long trench coat. In reality, scientists cannot interact directly with the brain because they do not understand enough about how it codes and decodes information. Now, neuroscientists in the McGovern Institute at MIT have been able to read out a part of the visual system's code involved in recognizing visual objects. The study by Chou Hung, Gabriel Kreiman, Tomaso Poggio, and James DiCarlo at the McGovern Institute at MIT appeared in the November 4 issue of Science.

"We want to know how the brain works to create intelligence," Poggio explains. "Our ability to recognize objects in the visual world is among the most complex problems the brain must solve. Computationally, it is much harder than reasoning." Yet we take it for granted because it appears to happen automatically and almost unconsciously.

In a fraction of a second, visual input about an object runs from the retina through increasingly higher levels of the visual stream, continuously reformatting the information until it reaches the highest purely visual level, the inferotemporal (IT) cortex. The IT cortex then passes key information for identifying and categorizing objects to other brain regions, such as the prefrontal cortex.

To explore how the IT cortex represents that information, the researchers trained monkeys to recognize different objects grouped into categories, such as faces, toys, and vehicles. The images appeared in different sizes and positions in the visual field. Recording the activity of hundreds of IT neurons produced a database of IT neural patterns in response to each object.

Then, the researchers used a computer algorithm, called a classifier, to decipher the code. The classifier was first trained to associate each object -- say, a monkey's face -- with a particular pattern of neural signals. The trained classifier could then be used to decode novel neural activity patterns. Remarkably, the classifier found that even just a split second's worth of the neural signal contained information specific enough to identify and categorize the object, even at positions and sizes the classifier had not previously "seen."
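The idea of training a classifier on labeled neural patterns and then testing it on novel ones can be illustrated with a minimal sketch. This is not the study's actual pipeline or data; the neuron count, noise levels, and the nearest-centroid decoder are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not the study's real data): 3 object categories,
# each evoking a characteristic firing pattern across 200 "IT neurons".
n_neurons, n_categories, n_trials = 200, 3, 50
prototypes = rng.normal(0.0, 1.0, size=(n_categories, n_neurons))

def record(category, trials):
    """Simulate noisy population responses to one object category."""
    noise = rng.normal(0.0, 0.8, size=(trials, n_neurons))
    return prototypes[category] + noise

# Build a training "database" of neural patterns for each category.
train = {c: record(c, n_trials) for c in range(n_categories)}

# Train a nearest-centroid classifier: one mean pattern per category.
centroids = np.stack([train[c].mean(axis=0) for c in range(n_categories)])

def classify(pattern):
    """Decode a novel neural pattern: pick the closest learned centroid."""
    dists = np.linalg.norm(centroids - pattern, axis=1)
    return int(np.argmin(dists))

# Decode held-out patterns the classifier has never seen before.
test_patterns = record(1, 20)
predictions = [classify(p) for p in test_patterns]
accuracy = np.mean([p == 1 for p in predictions])
print(f"decoding accuracy on novel patterns: {accuracy:.2f}")
```

Because each category's population pattern is far more distinctive than the trial-to-trial noise, even this simple decoder generalizes well to patterns it was never trained on, which is the logic behind reading out the IT code.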

It was quite surprising that so few IT neurons (several hundred out of millions), over such a short period of time, contained so much precise information. "If we could record a larger population of neurons simultaneously," Poggio says, "we might find even more robust codes hidden in the neural patterns and extract even fuller information."

"This work enhances our understanding of how the brain encodes visual information in a useful format for brain regions involved in action, planning, and memory," says DiCarlo. Figuring out how the brain encodes visual input is helping the development of more clever computer algorithms for artificial visual systems for applications such as airport security scanners or an automobile's pedestrian alert system.

DARPA, the Office of Naval Research, and the NIH funded this research, which was conducted at the McGovern Institute at MIT.

About the McGovern Institute at MIT

The McGovern Institute at MIT is a research and teaching institute committed to advancing human understanding and communications. Led by a team of world-renowned, multi-disciplinary scientists, the McGovern Institute was established in February 2000 by Lore Harp McGovern and Patrick McGovern to meet one of the great challenges of modern science: the development of a deep understanding of thought and emotion in terms of their realization in the human brain. Additional information is available at: http://web.mit.edu/mcgovern/