Verbal Working Memory Manipulation Uses Multiple Brain Networks

The neural structure our brains use for storing and processing information in verbal working memory is more complicated than previously understood, a new study by researchers at New York University suggests.

The work shows that processing information in working memory involves two different networks in the brain rather than one. The discovery has implications for the creation of artificial intelligence (AI) systems, such as speech translation tools.

Previous studies had focused on a single “central executive” overseeing the manipulation of information stored in working memory. The distinction is an important one, senior author Bijan Pesaran points out, since current AI systems that replicate human speech typically assume the computations involved in verbal working memory are performed by a single neural network.

Multiple Working Memory Networks

Pesaran, an associate professor at New York University’s Center for Neural Science, explains:

“Our results show there are at least two brain networks that are active when we are manipulating speech and language information in our minds. Artificial intelligence is gradually becoming more human-like. By better understanding intelligence in the human brain, we can suggest ways to improve AI systems. Our work indicates that AI systems with multiple working memory networks are needed.”

The study investigated a form of working memory that is critical for thinking, planning, and creative reasoning, and that involves holding in mind and transforming the information needed for speech and language.

The researchers studied patients undergoing brain monitoring for the treatment of drug-resistant epilepsy. Specifically, they decoded neural activity recorded from the surface of these patients’ brains as the patients listened to speech sounds and then spoke after a short delay.

In the task, subjects applied a rule provided by the researchers to transform the speech sounds they heard into spoken utterances. On some trials, the patients were told to repeat the sound they had heard; on others, they were instructed to listen to the sound and produce a different utterance.
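To make the task structure concrete, here is a minimal sketch of the repeat-versus-transform logic described above. The phoneme labels and the specific transform mapping are hypothetical illustrations, not the actual stimuli or rules used in the study.

```python
# Illustrative sketch of the two task rules (repeat vs. transform).
# The phoneme labels and the transform mapping below are hypothetical,
# not the stimuli or rules used in the study.

# A hypothetical mapping used only when the "transform" rule is in effect.
TRANSFORM_MAP = {"ba": "da", "da": "ga", "ga": "ba"}

def apply_rule(rule: str, heard: str) -> str:
    """Return the utterance the subject should produce for a heard sound."""
    if rule == "repeat":
        return heard                 # say back exactly what was heard
    if rule == "transform":
        return TRANSFORM_MAP[heard]  # say a different, rule-determined sound
    raise ValueError(f"unknown rule: {rule}")

# Example trials: (rule cue, heard sound) -> required spoken output
for rule, heard in [("repeat", "ba"), ("transform", "ba"), ("transform", "ga")]:
    print(rule, heard, "->", apply_rule(rule, heard))
```

In the experiment the rule was applied in the patients’ minds during the delay; the point of the sketch is only that the rule itself and the specific input-to-output mapping are separable pieces of information.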

More Intelligent Systems

The researchers decoded the neural activity in each patient’s brain as the patients applied the rule to convert what they heard into what they needed to say. The results revealed that manipulating information held in working memory involved the operation of two brain networks.

One network encoded the rule that the patients were using to guide the utterances they made (the rule network). Surprisingly, however, the rule network did not encode the details of how the subjects converted what they heard into what they said.

The process of using the rule to transform the sounds into speech was handled by a second network, the transformation network. Activity in this network could be used to track, moment by moment, how the input (what was heard) was being converted into the output (what was spoken).
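The dissociation the authors describe can be pictured with a toy decoding exercise on synthetic data: one set of simulated channels whose activity varies only with the rule in effect, and another whose activity varies with the specific input-to-output pairing. Everything below (channel counts, noise levels, the use of scikit-learn classifiers) is invented for illustration and does not reflect the study’s recordings or analysis pipeline.

```python
# Toy illustration (synthetic data only) of decoding a "rule network" and a
# "transformation network" separately. Channel counts, noise levels, and labels
# are invented; this is not the study's data or analysis pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 200

# Trial labels: which rule was cued, and which heard->spoken pairing occurred.
rule = rng.integers(0, 2, n_trials)     # 0 = repeat, 1 = transform
pairing = rng.integers(0, 3, n_trials)  # e.g. ba->da, da->ga, ga->ba (toy)

# Synthetic "rule network" channels: mean activity shifts with the rule only.
rule_net = rng.normal(0, 1, (n_trials, 20)) + rule[:, None] * 1.5

# Synthetic "transformation network" channels: activity shifts with the pairing.
xform_net = rng.normal(0, 1, (n_trials, 20)) + pairing[:, None] * 1.0

clf = LogisticRegression(max_iter=1000)

# In this construction, the rule is decodable from the rule network but the
# pairing is not, and the reverse holds for the transformation network.
print("rule from rule net:    ", cross_val_score(clf, rule_net, rule, cv=5).mean())
print("pairing from rule net: ", cross_val_score(clf, rule_net, pairing, cv=5).mean())
print("pairing from xform net:", cross_val_score(clf, xform_net, pairing, cv=5).mean())
print("rule from xform net:   ", cross_val_score(clf, xform_net, rule, cv=5).mean())
```

In this toy construction, a classifier recovers the rule from the “rule network” channels but not the specific pairing, while the reverse holds for the “transformation network” channels, mirroring the dissociation reported in the article.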

“One way we can enhance the development of more intelligent systems is with a fuller understanding of how the human brain and mind work,” notes Pesaran. “Diagnosing and treating working memory impairments in people involves psychological assessments. By analogy, machine psychology may one day be useful for diagnosing and treating impairments in the intelligence of our machines. This research examines a uniquely human form of intelligence, verbal working memory, and suggests new ways to make machines more intelligent.”

Translating speech you hear in one language into speech in another involves applying a similar set of abstract rules. People with impairments of verbal working memory find it difficult to learn new languages, and modern intelligent machines also have trouble learning languages, the researchers add.

Gregory B. Cogan, Asha Iyer, Lucia Melloni, Thomas Thesen, Daniel Friedman, Werner Doyle, Orrin Devinsky & Bijan Pesaran
Manipulating stored phonological input during verbal working memory
Nature Neuroscience (2016). doi:10.1038/nn.4459


Last Updated on November 10, 2022