Brain Imaging Shows If You Are Thinking of a Familiar Object
A team of Carnegie Mellon University computer scientists and cognitive neuroscientists, combining methods of machine learning and brain imaging, has found a way to identify where people’s thoughts and perceptions of familiar objects originate in the brain by identifying the patterns of brain activity associated with the objects. The method was developed over two years under the leadership of neuroscientist Professor Marcel Just and Computer Science Professor Tom M. Mitchell.
A dozen study participants lying in an fMRI scanner were shown line drawings of 10 different objects (five tools and five dwellings), one at a time, and asked to think about their properties. Just and Mitchell’s method was able to accurately determine which of the 10 drawings a participant was viewing based on their characteristic whole-brain neural activation patterns. To make the task more challenging, the researchers excluded information from the brain’s visual cortex, where raw visual information is available, and focused instead on the “thinking” parts of the brain.
The scientists found that the activation pattern evoked by an object was not confined to a single place in the brain. Thinking about a hammer, for instance, activated many locations: how one swings a hammer engaged the motor area, while the hammer’s function and its shape activated other areas.
According to Just and Mitchell, this is the first study to report the ability to identify the thought process associated with a single object. While earlier work showed it is possible to distinguish broad categories of objects such as “tools” versus “buildings,” this new research shows that it is possible to distinguish between items with very similar meanings, like two different tools. The machine-learning method involves training a computer algorithm (a set of mathematical rules) to extract the patterns from a participant’s brain activation, using data collected in one part of the study, and then testing the algorithm on data in an independent part of the same study. In this way, the algorithm is never previously exposed to the patterns on which it is tested.
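The article does not describe the study’s actual classifier or preprocessing, so as a purely illustrative sketch of the train-then-test-on-held-out-data idea, here is a minimal nearest-centroid classifier run on synthetic “voxel” patterns. All object names, voxel counts, and noise levels below are invented for the example; the real study used fMRI data and its own algorithms.

```python
import random

random.seed(0)

# Hypothetical stand-ins for the study's 10 objects (5 tools, 5 dwellings).
OBJECTS = ["hammer", "screwdriver", "pliers", "saw", "drill",
           "house", "castle", "hut", "igloo", "barn"]
N_VOXELS = 50  # toy substitute for whole-brain voxel counts


def synth_trial(obj_idx):
    """Simulate one viewing trial: an object-specific pattern plus noise."""
    return [(1.0 if v % 10 == obj_idx else 0.0) + random.gauss(0, 0.3)
            for v in range(N_VOXELS)]


# "Training" portion of the study: several trials per object.
train = {obj: [synth_trial(i) for _ in range(6)]
         for i, obj in enumerate(OBJECTS)}

# Extract one mean activation pattern (centroid) per object.
centroids = {obj: [sum(col) / len(col) for col in zip(*trials)]
             for obj, trials in train.items()}


def classify(pattern):
    """Label a new pattern by its nearest centroid (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda obj: dist(centroids[obj], pattern))


# Independent "test" portion: trials the algorithm has never seen.
correct = sum(classify(synth_trial(i)) == obj
              for i, obj in enumerate(OBJECTS) for _ in range(4))
accuracy = correct / (len(OBJECTS) * 4)
```

The key point mirrored here is the evaluation protocol: the centroids are learned from one set of trials and scored only on fresh trials, so the algorithm is never previously exposed to the patterns on which it is tested.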
Another important question addressed by the study was whether different brains exhibit the same or different activity patterns to encode these individual objects. To answer this question, the researchers tried identifying objects represented in one participant’s brain after training their algorithms using data collected from other participants. They found that the algorithm was indeed able to identify a participant’s thoughts based on the patterns extracted from the other participants.
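Continuing the toy setup above, the cross-participant result can be sketched as leave-one-participant-out evaluation: train on patterns pooled from the other participants, then test on the held-out brain. This only works if the simulated brains share a common code, which is the assumption built into the synthetic data below; all names and parameters are invented for illustration.

```python
import random

random.seed(1)

OBJECTS = ["hammer", "saw", "house", "castle"]  # invented subset
N_VOXELS = 40
N_PARTICIPANTS = 4


def trial(obj_idx, noise=0.35):
    """A shared object-specific template plus participant noise
    (the shared template models the study's finding of a common code)."""
    return [(1.0 if v % len(OBJECTS) == obj_idx else 0.0)
            + random.gauss(0, noise) for v in range(N_VOXELS)]


# Simulated recordings: 6 trials per object per participant.
data = {p: {obj: [trial(i) for _ in range(6)]
            for i, obj in enumerate(OBJECTS)}
        for p in range(N_PARTICIPANTS)}


def centroid(trials):
    return [sum(col) / len(col) for col in zip(*trials)]


def classify(pattern, centroids):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda obj: dist(centroids[obj], pattern))


# Leave-one-participant-out: learn centroids from everyone else,
# then identify objects in the held-out participant's data.
scores = []
for held_out in range(N_PARTICIPANTS):
    pooled = {obj: centroid([t
                             for p in range(N_PARTICIPANTS) if p != held_out
                             for t in data[p][obj]])
              for obj in OBJECTS}
    hits = sum(classify(t, pooled) == obj
               for obj in OBJECTS for t in data[held_out][obj])
    scores.append(hits / (len(OBJECTS) * 6))

mean_accuracy = sum(scores) / len(scores)
```

Above-chance accuracy here (chance would be 1/4) depends entirely on the shared template, which is the point: if each brain encoded objects idiosyncratically, training on other participants would tell you nothing about the held-out one.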
“This part of the study establishes, as never before, that there is a commonality in how different people’s brains represent the same object,” said Mitchell, head of the Machine Learning Department in Carnegie Mellon’s School of Computer Science and a pioneer in applying machine learning methods to the study of brain activity. “There has always been a philosophical conundrum as to whether one person’s perception of the color blue is the same as another person’s. Now we see that there is a great deal of commonality across different people’s brain activity corresponding to familiar tools and dwellings.”
“This first step using computer algorithms to identify thoughts of individual objects from brain activity can open new scientific paths, and eventually roads and highways,” added Svetlana Shinkareva, an assistant professor of psychology at the University of South Carolina who is the study’s lead author. “We hope to progress to identifying the thoughts associated not just with pictures, but also with words, and eventually sentences.”
Just, who directs the Center for Cognitive Brain Imaging at Carnegie Mellon, noted that one application the team is excited about is comparing the activation patterns of people with neurological disorders, such as autism. “We are looking forward to determining how people with autism neurally represent social concepts such as friend and happy,” he said. Just also is developing a brain-based theory of autism. “People with autism perceive others in a distinctive way that has been difficult to characterize,” he explained. “This machine learning approach offers a way to discover that characterization.”
Shinkareva SV, Mason RA, Malave VL, Wang W, Mitchell TM, et al. (2008) Using fMRI Brain Activation to Identify Cognitive States Associated with Perception of Tools and Dwellings. PLoS ONE 3(1): e1394. doi:10.1371/journal.pone.0001394
Previous studies have succeeded in identifying the cognitive state corresponding to the perception of a set of depicted categories, such as tools, by analyzing the accompanying pattern of brain activity, measured with fMRI. The current research focused on identifying the cognitive state associated with a 4s viewing of an individual line drawing (1 of 10 familiar objects, 5 tools and 5 dwellings, such as a hammer or a castle). Here we demonstrate the ability to reliably (1) identify which of the 10 drawings a participant was viewing, based on that participant’s characteristic whole-brain neural activation patterns, excluding visual areas; (2) identify the category of the object with even higher accuracy, based on that participant’s activation; and (3) identify, for the first time, both individual objects and the category of the object the participant was viewing, based only on other participants’ activation patterns. The voxels important for category identification were located similarly across participants, and distributed throughout the cortex, focused in ventral temporal perceptual areas but also including more frontal association areas (and somewhat left-lateralized). These findings indicate the presence of stable, distributed, communal, and identifiable neural states corresponding to object concepts.