How The Brain Retunes To Extract Language From Garbled Speech

When you are suddenly able to understand someone despite a thick accent, or finally make out the lyrics of a song, your brain seems to be re-tuning to pick out words that were previously incomprehensible.

Now, neuroscientists at the University of California, Berkeley have observed this re-tuning in action by recording directly from the surface of a person’s brain as the words of a previously unintelligible sentence suddenly pop out once the subject hears a clear version of the sentence. The re-tuning takes place within a second or less, they found.

The findings, published in Nature Communications, confirm the hypothesis that neurons in the auditory cortex continually retune themselves to pull meaning out of a noisy environment. These neurons pick out the aspects of sound associated with language: the components of pitch, amplitude and timing that distinguish words, or smaller units of sound called phonemes.

Signals Pop Out

First author and UC Berkeley graduate student Chris Holdgraf said:

“The tuning that we measured when we replayed the garbled speech emphasizes features that are present in speech. We believe that this tuning shift is what helps you ‘hear’ the speech in that noisy signal. The speech sounds actually pop out from the signal.”

Such pop-outs happen all the time: when you learn to hear the words of a foreign language, for example, or latch onto a friend’s conversation in a noisy bar. Or visually, when someone points out a number in what seems like a jumbled mass of colored dots, and somehow you cannot un-see that number.

Co-author Frédéric Theunissen, a UC Berkeley professor of psychology and a member of the Helen Wills Neuroscience Institute, said:

“Something is changing in the auditory cortex to emphasize anything that might be speech-like, and increasing the gain for those features, so that I actually hear that sound in the noise. It’s not like I am generating those words in my head. I really have the feeling of hearing the words in the noise with this pop-out phenomenon. It is such a mystery.”

Co-author Robert Knight, a UC Berkeley professor of psychology and Helen Wills Institute researcher, added:

“It is unbelievable how fast and plastic the brain is. In seconds or less, the electrical activity in the brain changes its response properties to pull out linguistic information. Behaviorally, this is a classic phenomenon, but this is the first time we have any evidence on how it actually works in humans.”

Brain Priming

Working with epilepsy patients who had had part of their skull removed and electrodes placed on the brain surface to track seizures, a technique known as electrocorticography, Holdgraf presented seven subjects with a simple auditory test.

He first played a highly garbled sentence, which almost no one initially understood. He then played a normal, easy-to-understand version of the same sentence and immediately repeated the garbled version.
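One common way to produce this kind of garbled-but-recoverable speech is noise vocoding, which keeps the slow loudness envelope in a few broad frequency bands and replaces the fine detail with noise. The sketch below illustrates that general idea only; the study's exact degradation procedure is described in the paper, and the file name, band count and cutoff frequencies here are placeholders.

```python
# Minimal sketch of noise-vocoding, one common way to degrade speech while
# preserving coarse spectro-temporal structure. Illustrative only; not
# necessarily the stimulus-degradation method used in the study.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(audio, fs, n_bands=4, f_lo=100.0, f_hi=6000.0):
    """Replace fine structure with noise, keeping each band's amplitude envelope."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)      # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(audio, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, audio)                  # band-limited speech
        env = np.abs(hilbert(band))                     # slow amplitude envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(audio)))
        out += env * carrier                            # noise carries the envelope
    return out / np.max(np.abs(out))                    # normalize to +/-1

# Placeholder file name; assumes a mono recording.
fs, speech = wavfile.read("sentence.wav")
garbled = noise_vocode(speech.astype(float), fs)
wavfile.write("sentence_garbled.wav", fs, (garbled * 32767).astype(np.int16))
```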

Almost everyone understood the sentence the second time around, even though they initially found it unintelligible.

The electrodes on the brain surface recorded major changes in neuronal activity before and after the clear sentence was played. When the garbled sentence was first played, activity in the auditory cortex, as measured by the 468 electrodes, was small. The brain could hear the sound, but couldn’t do much with it, Knight said.

When the clear sentence was played, the electrodes, as expected, recorded a pattern of neural activity consistent with the brain tuning into language. When the garbled sentence was played a second time, the electrodes recorded nearly the same language-appropriate neural activity, as if the underlying neurons had re-tuned to pick out words or parts of words.

“They respond as if they were hearing unfiltered normal speech,” Holdgraf said. “It changes the pattern of activity in the brain such that there is information there that wasn’t there before. That information is this unfiltered speech.”
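A rough way to picture what “tuning” means in these recordings: for each electrode, one can fit a linear encoding model (a spectro-temporal receptive field) that predicts the recorded activity from the sound’s spectrogram, then compare the fitted weights before and after priming. The sketch below shows that general recipe with ridge regression on made-up arrays; the array shapes, lag count and regularization value are illustrative assumptions, not the authors’ analysis pipeline.

```python
# Hypothetical sketch: estimate an electrode's spectro-temporal "tuning" as a
# linear encoding model (ridge regression from spectrogram to neural activity),
# then compare fits before and after priming. Made-up data, for illustration.
import numpy as np
from sklearn.linear_model import Ridge

def fit_strf(spectrogram, response, n_lags=30, alpha=1.0):
    """spectrogram: (n_times, n_freqs); response: (n_times,) neural activity."""
    n_times, n_freqs = spectrogram.shape
    # Build a lagged design matrix: row t holds the last n_lags spectrogram frames.
    X = np.zeros((n_times, n_lags * n_freqs))
    for lag in range(n_lags):
        X[lag:, lag * n_freqs:(lag + 1) * n_freqs] = spectrogram[:n_times - lag]
    model = Ridge(alpha=alpha).fit(X, response)
    return model.coef_.reshape(n_lags, n_freqs)         # time-lag x frequency weights

rng = np.random.default_rng(0)
spec = rng.random((5000, 32))                            # fake stimulus spectrogram
resp_before = rng.random(5000)                           # fake electrode activity
resp_after = rng.random(5000)

strf_before = fit_strf(spec, resp_before)
strf_after = fit_strf(spec, resp_after)
# A shift toward speech-like tuning would show up as a systematic change in the weights.
shift = np.corrcoef(strf_before.ravel(), strf_after.ravel())[0, 1]
print(f"correlation between pre- and post-priming tuning: {shift:.2f}")
```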

The findings will aid Knight and his colleagues in their quest to develop a speech decoder: a device implanted in the brain that would interpret people’s imagined speech and help speechless patients, such as those paralyzed by Lou Gehrig’s disease, communicate.

“Normal language activates tuning properties that are related to extraction of meaning and phonemes in the language,” Knight said. “Here, after you primed the brain with the unscrambled sentence, the tuning to the scrambled speech looked like the tuning to language, which allows the brain to extract meaning out of noise.”

Too Much Information

This trick is a testament to the brain’s ability to automatically pick and choose information from a noisy and overwhelming environment, focusing only on what’s relevant to a situation and discarding the rest.

“Your brain tries to get around the problem of too much information by making assumptions about the world,” Holdgraf said. “It says, ‘I am going to restrict the many possible things I could pull out from an auditory stimulus so that I don’t have to do a lot of processing.’ By doing that, it is faster and expends less energy.”

That means, though, that noisy or garbled sound can be hard to interpret. Holdgraf and his colleagues showed how quickly the brain can be primed to tune in to language.

The neurons from which they recorded activity were not tuned to a single frequency, like a radio, Theunissen said.

Rather, neurons in the upper levels of the auditory cortex respond to more complex aspects of sound, such as changes in frequency and amplitude – spectro-temporal modulation that we perceive as pitch, timbre and rhythm. While similar studies in animals, such as ferrets, have shown that neurons change how they filter or tune into a specific type of spectro-temporal modulation, the new results are the first in humans, and show a more rapid shift to process human language than has been seen in animals, he said.
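One standard way to summarize spectro-temporal modulation is the modulation power spectrum: the two-dimensional Fourier transform of a log spectrogram, which lays a sound out by how fast its energy changes over time (in Hz) and across frequency (in cycles per kHz). The sketch below computes it for a white-noise stand-in signal; it illustrates the concept rather than the paper’s feature extraction, and the window sizes are arbitrary choices.

```python
# Illustrative sketch of a modulation power spectrum: the 2-D Fourier transform
# of a log spectrogram, summarizing energy by temporal modulation (Hz) and
# spectral modulation (cycles/kHz). White noise stands in for a real recording.
import numpy as np
from scipy.signal import spectrogram

fs = 16000
signal = np.random.default_rng(0).standard_normal(fs * 2)   # 2 s stand-in signal

# Log-magnitude spectrogram: rows are frequencies, columns are time frames.
freqs, times, spec = spectrogram(signal, fs=fs, nperseg=256, noverlap=192)
log_spec = np.log(spec + 1e-10)

# 2-D FFT of the log spectrogram gives the modulation power spectrum.
mps = np.abs(np.fft.fftshift(np.fft.fft2(log_spec))) ** 2

# Modulation axes: temporal modulation in Hz, spectral modulation in cycles/kHz.
frame_rate = 1.0 / (times[1] - times[0])                     # spectrogram frames per second
freq_step_khz = (freqs[1] - freqs[0]) / 1000.0               # kHz between spectrogram rows
temporal_mod = np.fft.fftshift(np.fft.fftfreq(log_spec.shape[1], d=1.0 / frame_rate))
spectral_mod = np.fft.fftshift(np.fft.fftfreq(log_spec.shape[0], d=freq_step_khz))

print(f"temporal modulations span +/-{temporal_mod.max():.1f} Hz")
print(f"spectral modulations span +/-{spectral_mod.max():.1f} cycles/kHz")
```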

This is the first time the technique has been applied to humans to study how the receptive fields of auditory cortex neurons change.

Christopher R. Holdgraf, Wendy de Heer, Brian Pasley, Jochem Rieger, Nathan Crone, Jack J. Lin, Robert T. Knight & Frédéric E. Theunissen. Rapid tuning shifts in human auditory cortex enhance speech intelligibility. Nature Communications 7, 13654 (2016). doi:10.1038/ncomms13654
