According to traditional thought, the basic code for perception is the rate of a neuron’s electrical activity: the number of “spikes” produced when brain cells fire, tallied over a window of time. But neurons constantly speed up and slow down their signaling, which muddies any code based on counts alone.
Despite such hurdles, bionic eyes, also known as visual prosthetics, could soon become a reality as researchers make strides in strategies to reactivate the parts of the brain that process visual information in people affected by blindness.
Now, a new study by Salk Institute scientists shows that seeing the world relies not just on the number of spikes over a window of time but also on the timing of those spikes.
Neuron Activity Patterns
Salk Professor John Reynolds, the study’s senior investigator and holder of the Fiona and Sanjay Jha Chair in Neuroscience, said:
“In vision, it turns out there’s a huge amount of information present in the patterns of neuron activity over time. Increased computing power and new theoretical advances have now enabled us to begin to explore these patterns.”
The human brain houses an extensive network of neurons responsible for seeing everything from simple features to intricate stimuli: certain groups of neurons get excited by a horizontal or a vertical edge, for example, while others respond to faces or specific places.
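To make that idea concrete, a neuron’s preference for a particular edge angle is often described with a tuning curve, a bell-shaped relationship between stimulus angle and firing rate. The toy sketch below is illustrative only; the function, parameters, and numbers are assumptions, not from the study:

```python
import numpy as np

def tuning_curve(theta, preferred=90.0, width=20.0, peak_rate=50.0):
    """Toy orientation tuning: firing rate (spikes/s) peaks when the
    edge angle theta (degrees) matches the neuron's preferred angle."""
    delta = np.deg2rad(theta - preferred)
    # Orientation is 180-degree periodic, hence the factor of 2.
    return peak_rate * np.exp((np.cos(2 * delta) - 1) / np.deg2rad(width) ** 2)

for angle in (0, 45, 90, 135):
    print(f"edge at {angle:3d} deg -> {tuning_curve(angle):5.1f} spikes/s")
```

Running this shows the simulated neuron firing briskly at its preferred 90-degree edge and falling nearly silent 45 degrees away.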
Reynolds’ team focused on a visual brain area called V4, which sits in the middle of the brain’s visual-processing hierarchy and recognizes contours. Neurons in V4 are sensitive to the contours that define the boundaries of objects and help us recognize a shape regardless of where it appears in space.
But in 2013, Reynolds and postdoctoral researcher Anirvan Nandy discovered that V4 is more complicated than previously thought: some neurons in the area care only about contours within a designated spot in the visual field.
Ideal Observer Visual Recognition
Those findings led the team to wonder whether the activity code of V4 could be even more nuanced, taking in visual information not only in space but also in time.
“We don’t see the world around us as if we are looking at a series of photographs. We live – and see – in real time, and our neurons capture that,”
says Nandy, lead author of the new paper.
The scientists collaborated with Salk theoretician and postdoctoral researcher Monika Jadi to build, in computer code, what they called an “ideal observer”: a program that, given only the recorded brain data, would decipher, or at least guess, which moving pictures had been seen.
One version of the ideal observer had access only to the number of times the neurons fired, whereas the other had access to the full timing of the spikes. The timing-aware observer guessed the images more than twice as accurately as the count-only observer.
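The paper’s actual decoder is not reproduced here, but a minimal sketch of the comparison, under the assumption of a simple nearest-centroid decoder and synthetic spike trains (all names and numbers below are illustrative, not from the study), might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
N_TRIALS, N_BINS = 200, 20  # trials per stimulus, time bins per trial

def simulate(stim, n_trials):
    """Synthetic spike trains: both stimuli evoke roughly the same total
    spike count, but stimulus B shifts its burst later in the trial."""
    rates = np.full(N_BINS, 0.5)
    if stim == "A":
        rates[:5] = 2.0   # early burst
    else:
        rates[-5:] = 2.0  # late burst
    return rng.poisson(rates, size=(n_trials, N_BINS))

def nearest_centroid_accuracy(train_a, train_b, test_a, test_b):
    """Classify held-out trials by distance to each class mean."""
    mu_a, mu_b = train_a.mean(0), train_b.mean(0)
    def predict(x):
        return "A" if np.linalg.norm(x - mu_a) < np.linalg.norm(x - mu_b) else "B"
    correct = sum(predict(x) == "A" for x in test_a) + \
              sum(predict(x) == "B" for x in test_b)
    return correct / (len(test_a) + len(test_b))

A, B = simulate("A", 2 * N_TRIALS), simulate("B", 2 * N_TRIALS)
tr_a, te_a = A[:N_TRIALS], A[N_TRIALS:]
tr_b, te_b = B[:N_TRIALS], B[N_TRIALS:]

# Count-only observer: collapse each trial to a single spike count.
count_acc = nearest_centroid_accuracy(
    tr_a.sum(1, keepdims=True), tr_b.sum(1, keepdims=True),
    te_a.sum(1, keepdims=True), te_b.sum(1, keepdims=True))

# Timing observer: keep the full per-bin spike pattern.
timing_acc = nearest_centroid_accuracy(tr_a, tr_b, te_a, te_b)

print(f"count-only accuracy:  {count_acc:.2f}")   # hovers near chance (0.5)
print(f"with-timing accuracy: {timing_acc:.2f}")  # near 1.0
```

Because both synthetic stimuli evoke roughly the same total number of spikes, the count-only observer performs near chance, while the observer that sees when the spikes occur separates them almost perfectly, mirroring the qualitative result described above.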
Better ways to record from and stimulate the brain, along with advances in theoretical modeling, made these new findings possible.
Now the group plans not only to observe V4 but to activate it with light, using a cutting-edge technique called optogenetics. This, says Reynolds, is like taking the visual system for a spin.
It will help them better understand the relationship between patterns of neuron activity and how the brain perceives the world, potentially laying the groundwork for more advanced visual recognition prosthetics.
Anirvan S. Nandy, Jude F. Mitchell, Monika P. Jadi, John H. Reynolds. “Neurons in Macaque Area V4 Are Tuned for Complex Spatio-Temporal Patterns.” Neuron, 2016. DOI: 10.1016/j.neuron.2016.07.026