What If We Could Record And Rewind Our Thoughts?


Scientific discoveries that involve humans interfacing with machines can evoke reactions of fear and wonder. Quite often, these feelings are epitomized in works of science fiction. Think Mary Shelley’s “Frankenstein,” for starters; or its modern-day equivalent, “Ex Machina,” one of many films playing on our mixed feelings toward AI.

One British sci-fi TV series that captures this anxious mix of fear and wonder around machine-human interfaces is Black Mirror. One episode in the series, titled “The Entire History of You,” features a piece of technology called the “grain.”

When implanted behind a person’s right ear, the grain records that person’s first-hand visual and auditory experience and converts it into a chronological collection of watchable videos.

What if we could record every moment of every day, from the time we were born, with a device implanted in our brains? What if we could rewind and replay every single experience and every single social interaction we have ever had?

Two avenues of current neurotech research, brain-computer interfaces (BCIs) and brain decoding, promise something close to the output of Black Mirror’s grain.

The Movie In Your Brain

So how would one go about recording a visual and auditory neural experience and converting it into playback video? One possible solution would be to “listen in” on the chatter between neurons and glia in the representational parts of the brain, namely the visual and auditory cortices, and try to make sense of what the cellular conversation is about.

Think of it as watching a stage production with many actors talking in a cryptic language and trying to figure out the premise of the play. This is where the field of brain decoding comes in.

Given a spatial and temporal pattern of brain activity (an encoded message) while a subject is performing a certain task, how can one meaningfully decode the information contained in that pattern of activity?

As one might guess, the current version of brain decoding technology relies on fMRI combined with computational-statistical modeling. Using these techniques, one can, in a very real sense, “read” the mind.

In a 2011 paper, Jack Gallant’s team at UC Berkeley described a pioneering way to decode brain activity while a subject was watching a movie.

First, the researchers performed fMRI on subjects as they watched short movie clips. Next, they set out to build an algorithm that could reconstruct an image given the blood-oxygen-level-dependent (BOLD) signal associated with it.

The better the algorithm, the more closely the reconstructed image would resemble the actual frame from the movie clip. To reconcile the slow speed of fMRI data acquisition with the much faster pace of seamless visual experience, the researchers incorporated temporal filters into their algorithm to account for the sluggish hemodynamic response.
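To make the logic of this encode-then-decode pipeline concrete, here is a minimal sketch in Python. Everything in it is illustrative: the “BOLD” data are randomly generated stand-ins, and plain ridge regression replaces the motion-energy encoding model the Gallant lab actually used.

```python
# Minimal sketch of fMRI decoding with ridge regression.
# All data here are synthetic stand-ins; the real study used a
# motion-energy encoding model, not this simplification.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_train, n_test = 500, 50   # fMRI time points (hypothetical)
n_voxels = 200              # voxels in early visual cortex (hypothetical)
n_features = 64             # stimulus features per movie frame (hypothetical)

# Hypothetical stimulus features and the BOLD responses they evoke.
W_true = rng.normal(size=(n_features, n_voxels))
X_train = rng.normal(size=(n_train, n_features))
Y_train = X_train @ W_true + 0.5 * rng.normal(size=(n_train, n_voxels))

# "Decoding": learn a map from BOLD signals back to stimulus features.
decoder = Ridge(alpha=1.0)
decoder.fit(Y_train, X_train)

# Given BOLD signals from unseen clips, predict their stimulus features.
X_test = rng.normal(size=(n_test, n_features))
Y_test = X_test @ W_true + 0.5 * rng.normal(size=(n_test, n_voxels))
X_hat = decoder.predict(Y_test)

corr = np.corrcoef(X_test.ravel(), X_hat.ravel())[0, 1]
print(f"Correlation between true and decoded features: {corr:.2f}")
```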

Finally, to test its accuracy, the decoding algorithm was fed BOLD signals from unseen test movie clips.

To render its predicted reconstructions, the model drew on a separate, non-overlapping palette of random YouTube videos. Think of it as trying to reproduce a painting with a different set of watercolors than the ones the original was painted with.

Since the model was constructed using fMRI data from only a small region of the visual cortex, some categories, such as faces, were reconstructed well, while others, such as abstract designs, were not.

Mind-reading Electrodes

As cool as fMRI-enabled brain decoding is, one important factor limits its power: resolution, both spatial and temporal. While fMRI is a very powerful technique, the BOLD signal it measures is an indirect, blood-flow-based proxy and is not directly indicative of neuronal activity at the single-cell level.

What if we could directly record neuronal activity and use this to reconstruct visual scene information instead?

This is exactly what many neuroscientists have attempted, with remarkable success, using BCIs. By interfacing the brain’s electrical activity with a computer, neuroscientists can record neuronal responses directly and decode the stimulus that caused them.

As early as 1991, William Bialek and colleagues successfully estimated the nature of a visual stimulus from the electrophysiologically recorded responses of movement-sensitive neurons in the blowfly visual system. Then, in 1999, Yang Dan and colleagues reported reconstructing natural scenes from cat vision using electrodes implanted in the lateral geniculate nucleus.

Fast forward to the present, and modern BCI is truly the stuff of sci-fi. Take the example of a neural prosthetic device developed at Caltech, which enables a patient with severe spinal cord injury to control a robotic arm merely by forming the intention to move it.

An electrode array implanted in the posterior parietal cortex “listens in” on the neural signals generated when the patient intends to make a movement. A computer then processes this wiretapped electrical information into instructions that make a robotic arm perform the very movement the patient intended.
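At its core, this signal chain is a regression problem: population activity in, movement command out. The sketch below is a toy version with fabricated firing rates and a made-up linear tuning model; the actual device decodes far richer intention signals, but the basic idea of mapping population activity to a movement command is similar.

```python
# Toy intention decoder: map population firing rates to a 2-D
# movement velocity for a (hypothetical) robotic arm controller.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

n_units = 96        # electrodes in the array (hypothetical)
n_samples = 2000    # 50 ms bins of calibration data (hypothetical)

# Synthetic calibration session: firing rates recorded while the
# patient imagines moving toward known targets.
true_tuning = rng.normal(size=(n_units, 2))
intended_velocity = rng.normal(size=(n_samples, 2))
firing_rates = (intended_velocity @ true_tuning.T
                + rng.normal(size=(n_samples, n_units)))

decoder = LinearRegression().fit(firing_rates, intended_velocity)

# At run time, each new bin of firing rates becomes an arm command.
new_rates = rng.normal(size=(1, n_units))
vx, vy = decoder.predict(new_rates)[0]
print(f"Send to arm: velocity = ({vx:.2f}, {vy:.2f})")
```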

If an electrode array were implanted in the visual areas of the human brain instead of the parietal cortex, would the results be comparable to, or even better than, what Gallant’s team managed with fMRI?

An answer to this question may not be that distant.

In a recent PLOS Computational Biology article, Kai Miller and colleagues describe a pioneering method to decode human visual perception in near-real-time.

The researchers used a technique called electrocorticography (ECoG), which involves implanting electrodes on the surface of the brain, enabling them to record neuronal electrical potentials directly while subjects (epileptic patients) viewed still images of faces and houses. While other groups had previously attempted to decode visual stimuli using ECoG, they had always used stimuli with predetermined start times.

Natural vision, however, doesn’t happen at neatly predefined times; real-world visual stimuli are mostly spontaneous. To address this challenge, Miller et al. developed a novel computational method to predict, in near-real-time, whether a subject was viewing a house or a face using only the ECoG signal.

Strikingly, their predictions achieved an accuracy of 96% with a timing error of only 20 ms. While we still have a way to go before entire movies can be faithfully reconstructed using ECoG, these decoding studies chart an exciting path forward.
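A heavily simplified version of such a continuous decoder might look like the sketch below: a classifier trained on labeled ECoG windows, then slid across an incoming signal to flag face events as they occur. The data are synthetic, and per-channel variance is only a crude stand-in for the event-related-potential and broadband-spectral features Miller et al. actually used.

```python
# Simplified sliding-window decoder for "face vs. house" events.
# Synthetic data throughout; this only gestures at the real method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_channels, win = 32, 40          # electrodes, samples per window (hypothetical)

def broadband_power(window):
    """Crude stand-in for broadband power: per-channel signal variance."""
    return window.var(axis=1)

# Labeled training windows (0 = house, 1 = face), with a planted
# class-dependent power difference so there is something to learn.
labels = rng.integers(0, 2, size=400)
gain = 1.0 + 0.5 * labels[:, None, None]
train = gain * rng.normal(size=(400, n_channels, win))
X = np.array([broadband_power(w) for w in train])

clf = LogisticRegression(max_iter=1000).fit(X, labels)

# "Near-real-time": slide across a continuous recording, window by window.
stream = rng.normal(size=(n_channels, 2000))
stream[:, 800:840] *= 1.5          # plant a "face-like" high-power event
for start in range(0, stream.shape[1] - win, win):
    feats = broadband_power(stream[:, start:start + win])[None, :]
    if clf.predict_proba(feats)[0, 1] > 0.9:
        print(f"Face detected at sample {start}")
```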

What’s more, in an earlier PLOS Biology paper, Brian Pasley and colleagues managed to reconstruct actual speech from ECoG signals acquired over the human auditory cortex; representative samples of the reconstructed audio accompany their paper.
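Conceptually, that reconstruction is also a regression: predict each frequency band of the speech spectrogram from the ECoG channels. Here is a toy version under the (strong) assumption of a linear relationship and with entirely fabricated data; the real model used richer spectrotemporal features before rendering the result back into audio.

```python
# Toy spectrogram reconstruction: predict speech spectrogram bins
# from ECoG channels with ridge regression. Data are synthetic;
# Pasley et al. used far richer spectrotemporal models.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_time, n_ecog, n_freq = 1000, 64, 32   # hypothetical dimensions

# Fabricated linear relationship between cortical activity and
# the spectrogram of the heard speech.
mixing = rng.normal(size=(n_ecog, n_freq))
ecog = rng.normal(size=(n_time, n_ecog))
spectrogram = ecog @ mixing + 0.3 * rng.normal(size=(n_time, n_freq))

# Train on the first 80% of the recording, reconstruct the rest.
split = int(0.8 * n_time)
model = Ridge(alpha=1.0).fit(ecog[:split], spectrogram[:split])
recon = model.predict(ecog[split:])

corr = np.corrcoef(spectrogram[split:].ravel(), recon.ravel())[0, 1]
print(f"Reconstruction correlation on held-out speech: {corr:.2f}")
```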

Neurotechnology: Promise or Premonition?

As electrodes become more advanced, and as imaging techniques such as fiber photometry of neuronal dynamics and brain microendoscopy mature, one can only imagine the immense benefit this will bring to human society.

But, as the characters in Black Mirror discover, there will always be an underlying angst about any new form of technology. Only time will tell whether the efforts of neurotechnology will end in exultation or in the echo of “I’ve created a monster!”

Or maybe, like all great human endeavors, we will do a little of both.

References

Aflalo, T. et al. Decoding motor imagery from the posterior parietal cortex of a tetraplegic human. Science 348, 906-910, doi:10.1126/science.aaa5417 (2015).

Bialek, W., Rieke, F., de Ruyter van Steveninck, R. R. & Warland, D. Reading a neural code. Science 252, 1854-1857 (1991).

Gunaydin, L. A. et al. Natural neural projection dynamics underlying social behavior. Cell 157, 1535-1551, doi:10.1016/j.cell.2014.05.017 (2014).

Hung, C. P., Kreiman, G., Poggio, T. & DiCarlo, J. J. Fast readout of object identity from macaque inferior temporal cortex. Science 310, 863-866, doi:10.1126/science.1117593 (2005).

Jung, J. C., Mehta, A. D., Aksay, E., Stepnoski, R. & Schnitzer, M. J. In vivo mammalian brain imaging using one- and two-photon fluorescence microendoscopy. Journal of Neurophysiology 92, 3121-3133, doi:10.1152/jn.00234.2004 (2004).

Liu, H., Agam, Y., Madsen, J. R. & Kreiman, G. Timing, timing, timing: fast decoding of object information from intracranial field potentials in human visual cortex. Neuron 62, 281-290, doi:10.1016/j.neuron.2009.02.025 (2009).

Miller, K. J., Schalk, G., Hermes, D., Ojemann, J. G. & Rao, R. P. Spontaneous decoding of the timing and content of human object perception from cortical surface recordings reveals complementary information in the event-related potential and broadband spectral change. PLoS Computational Biology 12, e1004660, doi:10.1371/journal.pcbi.1004660 (2016).

Nishimoto, S. et al. Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology 21, 1641-1646, doi:10.1016/j.cub.2011.08.031 (2011).

Pasley, B. N. et al. Reconstructing speech from human auditory cortex. PLoS Biology 10, e1001251, doi:10.1371/journal.pbio.1001251 (2012).

Stanley, G. B., Li, F. F. & Dan, Y. Reconstruction of natural scenes from ensemble responses in the lateral geniculate nucleus. Journal of Neuroscience 19, 8036-8042 (1999).

Author: Aaron Sathyanesan. Republished courtesy of PLOS Blogs.