Reading Thoughts To Decipher What A Person Is Actually Seeing
Following ground-breaking research showing that neurons in the human brain respond in an abstract manner to particular individuals or objects, University of Leicester researchers have now discovered that, from the firing of this type of neuron, they can tell what a person is actually seeing.
Image caption: Researchers found that four spikes from a few neurons are enough to identify what a person is seeing. (Credit: iStockphoto)
The original research by Dr R Quian Quiroga, of the University’s Department of Engineering, showed that one neuron fired to, for instance, Jennifer Aniston, another one to Halle Berry, another one to the Sydney Opera House, etc.
The responses were abstract. For example, the neuron firing to Halle Berry responded to several different pictures of her and even to the letters of her name, but not to other people or names.
This result, published in Nature in 2005, came from data from patients suffering from epilepsy. As candidates for epilepsy surgery, they are implanted with intracranial electrodes to determine as accurately as possible the area where the seizures originate. From that, clinicians can evaluate the potential outcome of curative surgery.
Dr Quian Quiroga’s latest research, which has appeared in the Journal of Neurophysiology, follows on from this.
Dr Quian Quiroga explained: “For example, if the ‘Jennifer Aniston neuron’ increases its firing then we can predict that the subject is seeing Jennifer Aniston. If the ‘Halle Berry neuron’ fires, then we can predict that the subject is seeing Halle Berry, and so on.
“To do this, we used and optimised a ‘decoding algorithm’, which is a mathematical method to infer the stimulus from the neuronal firing. We also needed to optimise our recording and data processing tools to record simultaneously from as many neurons as possible. Currently we are able to record simultaneously from up to 100 neurons in the human brain.
“In these experiments we presented a large database of pictures, and discovered that we can predict what picture the subject is seeing far above chance. So, in simple words, we can read human thoughts from the neuronal activity.
“Once we reached this point, we then asked which features of the neuronal firing are most fundamental in allowing us to make these predictions. This gave us the chance to study basic principles of neural coding, i.e. how information is stored by neurons in the brain.
“For example, we found that there is a very limited time window in the neuronal firing that contains most of the information used for such predictions. Interestingly, neurons fired only 4 spikes on average during this time window. So, in other words, only 4 spikes from a few neurons are already telling us what the patient is seeing.”
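The decoding idea described above can be sketched in a few lines of code. The following is a minimal illustration, not the paper's actual method: it assumes a nearest-class-mean rule applied to simulated Poisson spike counts in a fixed post-stimulus window, with made-up numbers of units, images, and trials.

```python
# Hypothetical sketch: decode which image was shown from spike counts
# in a post-stimulus window, using a nearest-class-mean rule.
# All parameters and data here are illustrative, not from the study.
import numpy as np

rng = np.random.default_rng(0)

n_units, n_images, n_trials = 8, 5, 20

# Simulated "tuning": each unit fires a few extra spikes to one
# preferred image, on top of a low background rate.
base_rate = 1.0
tuning = np.full((n_images, n_units), base_rate)
for img in range(n_images):
    tuning[img, img % n_units] += 4.0  # preferred-image boost

def simulate_trial(img):
    """Poisson spike counts across units for one presentation of img."""
    return rng.poisson(tuning[img])

# Build training templates: mean spike-count vector per image.
templates = np.array([
    np.mean([simulate_trial(img) for _ in range(n_trials)], axis=0)
    for img in range(n_images)
])

def decode(counts):
    """Nearest-class-mean: pick the image whose template is closest."""
    dists = np.linalg.norm(templates - counts, axis=1)
    return int(np.argmin(dists))

# Evaluate on fresh simulated trials.
correct = sum(decode(simulate_trial(img)) == img
              for img in range(n_images) for _ in range(n_trials))
accuracy = correct / (n_images * n_trials)
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / n_images:.2f})")
```

Even this crude rule decodes well above chance when a handful of units each contribute only a few informative spikes, which is the qualitative point of the quoted result.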
Potential applications of this discovery include the development of Neural Prosthetic devices to be used by paralysed patients or amputees. A patient with a lesion in the spinal cord (as with the late Christopher Reeve) can still think about reaching for a cup of tea with his arm, but this command is not transmitted to the muscles.
The idea of Neural Prostheses is to read these commands directly from the brain and transmit them to bionic devices such as a robotic arm that the patient could control directly from the brain.
Dr Quian Quiroga’s work showing that it is possible to read signals from the brain is a good step forward in this direction. But there are still clinical and ethical issues that have to be resolved before Neural Prosthetic devices can be applied in humans.
In particular, these would involve invasive surgery, which would have to be justified by a clear improvement for the patient before it could be undertaken.
J Neurophysiol 98: 1997-2007, 2007. First published August 1, 2007; doi:10.1152/jn.00125.2007
Decoding Visual Inputs From Multiple Neurons in the Human Temporal Lobe
R. Quian Quiroga (1,2,3), L. Reddy (2), C. Koch (2) and I. Fried (3,4)
(1) Department of Engineering, University of Leicester, Leicester, United Kingdom; (2) Computation and Neural Systems, California Institute of Technology, Pasadena; (3) Division of Neurosurgery and Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, Los Angeles, California; and (4) Functional Neurosurgery Unit, Tel Aviv Medical Center and Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
Submitted 4 February 2007; accepted in final form 28 July 2007
We investigated the representation of visual inputs by multiple simultaneously recorded single neurons in the human medial temporal lobe, using their firing rates to infer which images were shown to subjects. The selectivity of these neurons was quantified with a novel measure. About four spikes per neuron, triggered between 300 and 600 ms after image onset in a handful of units (7.8 on average), predicted the identity of images far above chance. Decoding performance increased linearly with the number of units considered, peaked between 400 and 500 ms, did not improve when considering correlations among simultaneously recorded units, and generalized to very different images. The feasibility of decoding sensory information from human extracellular recordings has implications for the development of brain–machine interfaces.