fMRI Study Supports Predictive Coding Theories of Vision

A new study by neuroscientists at the University of Glasgow offers a better understanding of visual mechanisms, and of how seeing is a constant two-way dialogue between the brain and the eyes. Using functional magnetic resonance imaging (fMRI), the researchers have shown how the human brain predicts what our eyes will see next.

The researchers, led by Professor Lars Muckli of the University of Glasgow, used a visual illusion involving two stationary flashing squares. To the observer, the squares look like a single square moving between the two locations, because the brain predicts motion.

During these flashes, the authors instructed participants to move their eyes. The researchers imaged the visual cortex and found that the prediction of motion updated to a new spatial position in the cortex with the eye movement.

Saccade Prediction

We move our eyes approximately four times per second, meaning our brains have to process new visual information every 250 milliseconds. This quick, simultaneous movement of both eyes between two or more phases of fixation in the same direction is called a saccade.

Nevertheless, the world appears stable. If you were to move a video camera that frequently, the footage would appear jumpy. The reason we still perceive the world as stable is that our brains think ahead.

In other words, the brain predicts what it is going to see after you have moved your eyes.

Predictive Coding

Predictive coding theories of vision propose that higher cortical areas use internal models of the world to predict sensory inputs. Cortical feedback carries these predictions back to the primary visual cortex (V1), where a neural mechanism compares them to the actual sensory inputs.
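
As a rough illustration of the comparison predictive coding proposes, here is a minimal sketch in Python (the function name and toy data are illustrative assumptions, not the authors' model): the top-down prediction is subtracted from the feedforward input, leaving a prediction error that carries only the mismatch.

```python
import numpy as np

def prediction_error(feedforward, feedback_prediction):
    """Residual between actual input and the top-down prediction.

    Predictive coding proposes that V1 compares what the eyes deliver
    (feedforward) with what higher areas expect (feedback); only the
    mismatch needs to be passed on for further processing.
    """
    return feedforward - feedback_prediction

# Toy example: a 4x4 patch of "retinal" input and a nearly correct prediction.
rng = np.random.default_rng(0)
sensory_input = rng.random((4, 4))
prediction = sensory_input + rng.normal(scale=0.05, size=(4, 4))

error = prediction_error(sensory_input, prediction)
print("Mean absolute prediction error:", float(np.abs(error).mean()))
```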

One fundamental assumption of predictive coding, however, remained untested. Humans saccade approximately four times per second, changing the retinal pattern of sensory inputs to V1. For cortical predictive feedback to be functional, therefore, it must update to new retinotopic locations in V1 in time to meet the post-saccadic input.
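
To make that remapping requirement concrete, here is a small hedged sketch (the names are hypothetical, and np.roll is a crude stand-in for whatever neural computation performs the shift): a retinotopic prediction map is translated so it lines up with where the post-saccadic input will land.

```python
import numpy as np

def remap_prediction(prediction_map, saccade_vector):
    """Shift a retinotopic prediction map to compensate for an eye movement.

    A saccade displaces the image on the retina, so a prediction expressed
    in retinotopic coordinates must be translated by the opposite of the
    saccade vector to land on the correct post-saccadic V1 location.
    """
    dy, dx = saccade_vector
    return np.roll(prediction_map, shift=(-dy, -dx), axis=(0, 1))

# Toy example: a predicted feature at grid position (2, 2); the eye then
# moves one step down and one step right.
prediction = np.zeros((5, 5))
prediction[2, 2] = 1.0
remapped = remap_prediction(prediction, saccade_vector=(1, 1))
print(np.argwhere(remapped == 1.0))  # feature now expected at (1, 1)
```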

Key to this study is the brain's creation of an internal model of the world, in which feedback carries sensory predictions from higher areas down to V1. The apparent motion illusion offered a paradigm for investigating such a model.

The Potential of fMRI

Professor Lars Muckli, of the Institute of Neuroscience & Psychology, said:

“This study is important because it demonstrates how fMRI can contribute to this area of neuroscience research. Further to that, finding a feasible mechanism for brain function will contribute to brain-inspired computing and artificial intelligence, as well as aid our investigation into mental disorders.”

The study also reveals the potential for fMRI to contribute to this area of neuroscience research: the authors were able to detect a difference in processing of only 32 ms, a far finer temporal distinction than fMRI is typically thought capable of resolving.
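
To give a feel for why a 32 ms distinction is remarkable for fMRI, here is a hedged sketch in Python (this is not the authors' analysis; the double-gamma haemodynamic response function and its parameters are standard textbook assumptions). The BOLD response unfolds over seconds, so a 32 ms latency shift changes the measured curve only slightly:

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Canonical double-gamma haemodynamic response function (standard defaults)."""
    peak = gamma.pdf(t, 6)          # positive response, peaking around 5 s
    undershoot = gamma.pdf(t, 16)   # late negative undershoot
    return peak - undershoot / 6.0

t = np.arange(0.0, 25.0, 0.001)     # 1 ms time grid over 25 s
response = hrf(t)
shifted = hrf(t - 0.032)            # the same response, delayed by 32 ms

# The 32 ms latency shift changes the slow BOLD curve by only about 1%
# of its peak amplitude, which is why differences this small are usually
# considered beyond fMRI's reach.
print(f"Peak amplitude:  {response.max():.4f}")
print(f"Max difference:  {np.abs(response - shifted).max():.6f}")
```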

“Visual information is received from the eyes and processed by the visual system in the brain. We call this visual information ‘feedforward’ input. At the same time, the brain also sends information to the visual system; this information is called ‘feedback’.

Feedback information influences our perception of the feedforward input using expectations based on our memories of similar perceptual events. Feedforward and feedback information interact with one another to produce the visual scenes we perceive every day,”

said Dr Grace Edwards, the study’s first author.
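
As a loose illustration of that interaction, here is a toy sketch (the linear blend and its weight are my own simplifying assumptions; real feedforward/feedback interactions in cortex are far richer): perception is modelled as feedforward input pulled toward a feedback expectation.

```python
import numpy as np

def perceive(feedforward, feedback, feedback_weight=0.3):
    """Toy blend of bottom-up input with top-down expectation.

    The weight is a stand-in for how strongly prior expectations shape
    perception; actual cortical interactions are nonlinear and
    layer-specific, which this sketch does not capture.
    """
    return (1.0 - feedback_weight) * feedforward + feedback_weight * feedback

# Toy example: noisy input, smooth expectation from memory of similar scenes.
rng = np.random.default_rng(1)
expectation = np.linspace(0.0, 1.0, 8)                     # what the brain predicts
noisy_input = expectation + rng.normal(scale=0.2, size=8)  # what the eyes deliver

percept = perceive(noisy_input, expectation)
print("Percept is pulled toward the expectation:", percept.round(2))
```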

Saccades are one of the fastest movements produced by the human body, although blinks may reach even higher peak velocities. Saccades to an unexpected stimulus normally take about 200 milliseconds (ms) to initiate, and then last from about 20–200 ms, depending on their amplitude (20–30 ms is typical in language reading).

The work was funded by grants from the Biotechnology and Biological Sciences Research Council, the European Research Council, and a Human Brain Project grant from the European Union’s Horizon 2020 Research and Innovation Program.

Grace Edwards, Petra Vetter, Fiona McGruer, Lucy S. Petro & Lars Muckli
Predictive feedback to V1 dynamically updates with sensory input
Scientific Reports 7, Article number: 16538 (2017) doi:10.1038/s41598-017-16093-y
