Archive for August 2009
In a finding that sheds new light on the neural mechanisms involved in social behavior, neuroscientists at the California Institute of Technology (Caltech) have pinpointed the brain structure responsible for our sense of personal space.
Patient SM, a woman with complete bilateral amygdala lesions (red), preferred to stand close to the experimenter (black). On average, control participants (blue) preferred to stand nearly twice as far away from the same experimenter. Images drawn to scale.
The discovery could offer insight into autism and other disorders where social distance is an issue.
The structure, the amygdala—a pair of almond-shaped regions located in the medial temporal lobes—was previously known to process strong negative emotions, such as anger and fear, and is considered the seat of emotion in the brain. However, it had never been linked rigorously to real-life human social interaction.
The scientists, led by Ralph Adolphs, Bren Professor of Psychology and Neuroscience and professor of biology, and by postdoctoral scholar Daniel P. Kennedy, were able to make this link with the help of a unique patient, a 42-year-old woman known as SM, who has extensive damage to the amygdala on both sides of her brain.
“SM is unique, because she is one of only a handful of individuals in the world with such a clear bilateral lesion of the amygdala, which gives us an opportunity to study the role of the amygdala in humans,” says Kennedy, the lead author of the new report.
SM has difficulty recognizing fear in the faces of others and judging how trustworthy someone is, two consequences of amygdala lesions that Adolphs and colleagues documented in prior studies.
During his years of studying her, Adolphs also noticed that the very outgoing SM is almost too friendly, to the point of “violating” what others might perceive as their own personal space. “She is extremely friendly, and she wants to approach people more than normal. It’s something that immediately becomes apparent as you interact with her,” says Kennedy.
Previous studies of humans had never revealed an association between the amygdala and personal space. From their knowledge of the literature, however, the researchers knew that monkeys with amygdala lesions preferred to stay closer to other monkeys and to humans than healthy monkeys did.
Intrigued by SM’s unusual social behavior, Adolphs, Kennedy, and their colleagues devised a simple experiment to quantify and compare her sense of personal space with that of healthy volunteers.
The experiment used what is known as the stop-distance technique. Briefly, the subject (SM or one of 20 other volunteers, representing a cross-section of ages, ethnicities, educations, and genders) stands a predetermined distance from an experimenter, then walks toward the experimenter and stops at the point where they feel most comfortable. The chin-to-chin distance between the subject and the experimenter is determined with a digital laser measurer.
Among the 20 other subjects, the average preferred distance was 0.64 meters, roughly two feet. SM’s preferred distance was just 0.34 meters, or about one foot. Unlike the other subjects, who reported feelings of discomfort when the experimenter came closer than their preferred distance, there was no point at which SM became uncomfortable; even nose-to-nose, she was at ease. Furthermore, her preferred distance didn’t change based on who the experimenter was or how well she knew them.
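For readers who want to check the unit conversions, the figures work out as follows. This is a back-of-the-envelope sketch; only the 0.64 m and 0.34 m values come from the study, and the foot conversions are approximate:

```python
# Preferred chin-to-chin distances from the stop-distance experiment
control_mean_m = 0.64   # average over the 20 healthy comparison subjects
sm_distance_m = 0.34    # patient SM's preferred distance

M_PER_FOOT = 0.3048     # exact metric definition of the foot

control_ft = control_mean_m / M_PER_FOOT   # ~2.1 ft ("roughly two feet")
sm_ft = sm_distance_m / M_PER_FOOT         # ~1.1 ft ("about one foot")
ratio = control_mean_m / sm_distance_m     # ~1.88 ("nearly twice as far")

print(f"controls: {control_ft:.1f} ft, SM: {sm_ft:.1f} ft, ratio: {ratio:.2f}")
```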
“Respecting someone’s space is a critical aspect of human social interaction, and something we do automatically and effortlessly,” Kennedy says. “These findings suggest that the amygdala, because it is necessary for the strong feelings of discomfort that help to repel people from one another, plays a central role in this process. They also help to expand our understanding of the role of the amygdala in real-world social interactions.”
Adolphs and colleagues then used a functional magnetic resonance imaging (fMRI) scanner to examine the activation of the amygdala in a separate group of healthy subjects who were told when an experimenter was either in close proximity or far away from them. When in the fMRI scanner, subjects could not see, feel, or hear the experimenter; nevertheless, their amygdalae lit up when they believed the experimenter to be close by. No activity was detected when subjects thought the experimenter was on the other side of the room.
“It was just the idea of another person being there, or not, that triggered the amygdala,” Kennedy says. The study shows, he says, that “the amygdala is involved in regulating social distance, independent of the specific sensory cues that are typically present when someone is standing close, like sounds, sights, and smells.”
The researchers believe that interpersonal distance is not something we consciously think about, although, unlike SM, we become acutely aware when our space is violated. Kennedy recounts his own experience with having his personal space violated during a wedding: “I felt really uncomfortable, and almost fell over a chair while backing up to get some space.”
Across cultures, accepted interpersonal distances can vary dramatically, with individuals who live in cultures where space is at a premium (say, China or Japan) seemingly tolerant of much closer distances than individuals in, say, the United States. (Meanwhile, our preferred personal distance can vary depending on our situation, making us far more willing to accept less space in a crowded subway car than we would be at the office.)
One explanation for this variation, Kennedy says, is that cultural preferences and experiences affect the brain over time and how it responds in particular situations. “If you’re in a culture where standing close to someone is the norm, you’d learn that was acceptable and your personal space would vary accordingly,” he says. “Even then, if you violate the accepted cultural distance, it will make people uncomfortable, and the amygdala will drive that feeling.”
The findings may have relevance to studies of autism, a complex neurodevelopmental disorder that affects an individual’s ability to interact socially and communicate with others. “We are really interested in looking at personal space in people with autism, especially given findings of amygdala dysfunction in autism. We know that some people with autism do have problems with personal space and have to be taught what it is and why it’s important,” Kennedy says.
He also adds a word of caution: “It’s clear that amygdala dysfunction cannot account for all the social impairments in autism, but likely contributes to some of them and is definitely something that needs to be studied further.”
Nature Neuroscience, 2009; DOI: 10.1038/nn.2381
Personal space regulation by the human amygdala.
Daniel P. Kennedy (1), Jan Gläscher (1), J. Michael Tyszka (2) & Ralph Adolphs (1,2)
1. Division of Humanities and Social Sciences, California Institute of Technology, Pasadena, California, USA.
2. Division of Biology, California Institute of Technology, Pasadena, California, USA.
Correspondence to: Daniel P. Kennedy, e-mail: email@example.com
The amygdala plays key roles in emotion and social cognition, but how this translates to face-to-face interactions involving real people remains unknown. We found that an individual with complete amygdala lesions lacked any sense of personal space. Furthermore, healthy individuals showed amygdala activation upon close personal proximity. The amygdala may be required to trigger the strong emotional reactions normally following personal space violations, thus regulating interpersonal distance in humans.
Why is it that you can instantly recall your own phone number but have to struggle with your mental Rolodex to remember a new number you heard a few moments ago? The two tasks “feel” different because they involve two different types of memory – long-term and short-term, respectively – that are stored very differently in the brain. The same appears to be true across the animal kingdom, even in insects such as the fruit fly.
Assistant Professor Josh Dubnau, Ph.D., of Cold Spring Harbor Laboratory (CSHL) and his team have uncovered an important molecular and cellular basis of this difference using the fruit fly as a model.
The CSHL team has found that when fruit flies learn a task, two different groups of neurons within the fly brain’s center of learning and memory each simultaneously form their own unique memory signal, or trace. Both types of trace, the team discovered, depend on the activity of a gene called rutabaga, of which humans have a similar version. A rapidly forming, short-lived trace in a group of neurons that make up a structure called the gamma (γ) lobe produces a short-term memory. A slower, long-lived trace in the alpha-beta (αβ) lobes establishes a long-term memory.
Neuroscientists call the rutabaga gene a coincidence detector because it codes for an enzyme whose activity levels get a big boost when a fly perceives two stimuli that it has to learn to associate with one another. This enzymatic activity in turn signals to other genes critical for learning and memory.
A classic experiment that teaches flies to associate stimuli – and one that the CSHL team used – is to place them in a training tube attached to an electric grid, and to administer shocks through the grid right after a certain odor is piped into the tube. Flies with normal rutabaga genes learn to associate the odor with the shock and if given a choice, buzz away from the grid. But flies that carry a mutated version of rutabaga in their brains lack both short- and long-term memory, don’t learn the association, and so fail to avoid the shocks.
The team has now found, however, that this total memory deficit can be partly reversed by restoring normal rutabaga function in just one lobe. Flies in which normal rutabaga function was restored within the γ lobe alone regained short-term memory but not long-term memory. Restoring the gene’s function in the αβ lobes alone restored long-term memory, but not short-term memory.
“This ability to independently restore either short- or long-term memory depending on where rutabaga is expressed supports the idea that there are different anatomical and circuit requirements for different stages of memory,” Dubnau explains. It also challenges a previously held notion that neurons that form short-term memory are also involved in storing long-term memory.
Previous biochemical studies have suggested that rapid, short-lived signals characteristic of short-term memory cause unstable changes in a neuron’s connectivity that are then stabilized by slower, long-lasting signals that help establish long-term memory in the same neuron. But anatomy studies have long hinted at different circuits. Surgical lesions that destroy different parts of an animal’s brain can separately disrupt the two kinds of memory, suggesting that the two memory types might involve different neuronal populations.
“We’ve now used genetics as a finer scalpel than surgery to reconcile these findings,” Dubnau says. His team’s results suggest that biochemical signaling for both types of memory is triggered at the same time, but in different sets of neurons. Memory traces form more quickly in one set than in the other, but the set that lags behind consolidates the memory and stores it long-term.
But why might the fly brain divide up the labor of storing different memory phases this way? Dubnau’s hunch is that it might be because for every stimulus it receives, the brain creates its own representation of this information. And each time this stimulus – for example, an odor – is perceived again, the brain adds to the representation and modifies it. “Such modifications might eventually disrupt the brain’s ability to accurately remember that information,” Dubnau speculates. “It might be better to store long-term memories in a different place where there’s no such flux.”
The team’s next mission is to determine how much cross talk, if any, is required between the two lobes for long-term memory to get consolidated. This work will add to the progress that scientists have already made in treating memory deficits in humans with drugs aimed at molecular members of the rutabaga-signaling pathway to enhance its downstream effects.
Current Biology, Volume 19, Issue 16, Pages 1341-1350; DOI: 10.1016/j.cub.2009.07.016
Short- and Long-Term Memory in Drosophila Require cAMP Signaling in Distinct Neuron Types.
Allison L. Blum, Wanhe Li, Mike Cressy, and Josh Dubnau.
A common feature of memory and its underlying synaptic plasticity is that each can be dissected into short-lived forms involving modification or trafficking of existing proteins and long-term forms that require new gene expression. An underlying assumption of this cellular view of memory consolidation is that these different mechanisms occur within a single neuron. At the neuroanatomical level, however, different temporal stages of memory can engage distinct neural circuits, a notion that has not been conceptually integrated with the cellular view. Here, we investigated this issue in the context of aversive Pavlovian olfactory memory in Drosophila. Previous studies have demonstrated a central role for cAMP signaling in the mushroom body (MB). The Ca2+-responsive adenylyl cyclase RUTABAGA is believed to be a coincidence detector in γ neurons, one of the three principal classes of MB Kenyon cells. We were able to separately restore short-term or long-term memory to a rutabaga mutant with expression of rutabaga in different subsets of MB neurons. Our findings suggest a model in which the learning experience initiates two parallel associations: a short-lived trace in γ neurons, and a long-lived trace in α/β neurons.
Attention, multitaskers (if you can pay attention, that is): Your brain may be in trouble.
People who are regularly bombarded with several streams of electronic information do not pay attention, control their memory or switch from one job to another as well as those who prefer to complete one task at a time, a group of Stanford researchers has found.
High-tech jugglers are everywhere – keeping up several e-mail and instant message conversations at once, text messaging while watching television and jumping from one website to another while plowing through homework assignments.
But after putting about 100 students through a series of three tests, the researchers realized those heavy media multitaskers are paying a big mental price.
“They’re suckers for irrelevancy,” said communication Professor Clifford Nass, one of the researchers. “Everything distracts them.”
Social scientists have long assumed that it’s impossible to process more than one string of information at a time. The brain just can’t do it. But many researchers have guessed that people who appear to multitask must have superb control over what they think about and what they pay attention to.
So Nass and his colleagues, Eyal Ophir and Anthony Wagner, set out to learn what gives multitaskers their edge. What is their gift?
“We kept looking for what they’re better at, and we didn’t find it,” said Ophir, the study’s lead author and a researcher in Stanford’s Communication Between Humans and Interactive Media Lab.
In each of their tests, the researchers split their subjects into two groups: those who regularly do a lot of media multitasking and those who don’t.
In one experiment, the groups were shown sets of two red rectangles alone or surrounded by two, four or six blue rectangles. Each configuration was flashed twice, and the participants had to determine whether the two red rectangles in the second frame were in a different position than in the first frame.
They were told to ignore the blue rectangles, and the low multitaskers had no problem doing that. But the high multitaskers were constantly distracted by the irrelevant blue images. Their performance was horrible.
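The rectangle task can be sketched as a simple trial generator. This is a simplified rendering of the paradigm as described above, not the authors’ actual stimulus code; the grid size and placement rules are assumptions made for illustration:

```python
import random

def make_trial(n_distractors, change, grid=8):
    """One (simplified) change-detection trial: two red target rectangles
    plus blue distractors placed on a grid, shown in two frames.
    On 'change' trials, one red target moves to an empty cell."""
    cells = random.sample([(x, y) for x in range(grid) for y in range(grid)],
                          2 + n_distractors)
    reds, blues = cells[:2], cells[2:]
    frame1 = {**{c: "red" for c in reds}, **{c: "blue" for c in blues}}
    frame2 = dict(frame1)
    if change:
        old = random.choice(reds)
        empty = [(x, y) for x in range(grid) for y in range(grid)
                 if (x, y) not in frame1]
        del frame2[old]
        frame2[random.choice(empty)] = "red"
    return frame1, frame2
```

The subject’s job is to report whether the red rectangles moved between the two frames while ignoring the blue ones; varying `n_distractors` (0, 2, 4, or 6) tests how well irrelevant items are filtered out.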
Because the high multitaskers showed they couldn’t ignore things, the researchers figured they were better at storing and organizing information. Maybe they had better memories.
The second test proved that theory wrong. After being shown sequences of letters, the high multitaskers did a lousy job of remembering when a letter was making a repeat appearance.
“The low multitaskers did great,” Ophir said. “The high multitaskers were doing worse and worse the further they went along because they kept seeing more letters and had difficulty keeping them sorted in their brains.”
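A task of this kind, where the subject flags a letter that repeats one shown n items earlier, is commonly implemented as an n-back test. A minimal sketch of the target rule, assuming the n-back form (the article does not name the exact variant used):

```python
def nback_targets(letters, n=2):
    """Mark each position where the current letter matches the one
    shown n items earlier (the 'repeat appearances' to be detected)."""
    return [i >= n and letters[i] == letters[i - n]
            for i in range(len(letters))]

# nback_targets("ABAB", 2) -> [False, False, True, True]
```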
Puzzled but not yet stumped on why the heavy multitaskers weren’t performing well, the researchers conducted a third test. If the heavy multitaskers couldn’t filter out irrelevant information or organize their memories, perhaps they excelled at switching from one thing to another faster and better than anyone else.
Wrong again, the study found.
The test subjects were shown images of letters and numbers at the same time and instructed what to focus on. When they were told to pay attention to numbers, they had to determine if the digits were even or odd. When told to concentrate on letters, they had to say whether they were vowels or consonants.
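The classification rule for this task is easy to state in code. This is a sketch of the rule exactly as described, not the authors’ implementation:

```python
def respond(stimulus, cue):
    """Correct response for the cued letter/number classification task:
    classify the digit as even/odd when cued 'number', or the letter
    as vowel/consonant when cued 'letter'."""
    letter, digit = stimulus            # e.g. ("A", 4), shown together
    if cue == "number":
        return "even" if digit % 2 == 0 else "odd"
    return "vowel" if letter.lower() in "aeiou" else "consonant"
```

The cost of switching is measured by comparing response times on trials where the cue changes against trials where it repeats.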
Again, the heavy multitaskers underperformed the light multitaskers.
“They couldn’t help thinking about the task they weren’t doing,” Ophir said. “The high multitaskers are always drawing from all the information in front of them. They can’t keep things separate in their minds.”
The researchers are still studying whether chronic media multitaskers are born with an inability to concentrate or are damaging their cognitive control by willingly taking in so much at once. But they’re convinced the minds of multitaskers are not working as well as they could.
“When they’re in situations where there are multiple sources of information coming from the external world or emerging out of memory, they’re not able to filter out what’s not relevant to their current goal,” said Wagner, an associate professor of psychology. “That failure to filter means they’re slowed down by that irrelevant information.”
So maybe it’s time to stop e-mailing if you’re following the game on TV, and rethink singing along with the radio if you’re reading the latest news online. By doing less, you might accomplish more.
PNAS, published online before print August 24, 2009; DOI: 10.1073/pnas.0903620106
Cognitive control in media multitaskers
Eyal Ophir, Clifford Nass, and Anthony D. Wagner
Abstract Chronic media multitasking is quickly becoming ubiquitous, although processing multiple incoming streams of information is considered a challenge for human cognition. A series of experiments addressed whether there are systematic differences in information processing styles between chronically heavy and light media multitaskers. A trait media multitasking index was developed to identify groups of heavy and light media multitaskers. These two groups were then compared along established cognitive control dimensions. Results showed that heavy media multitaskers are more susceptible to interference from irrelevant environmental stimuli and from irrelevant representations in memory. This led to the surprising result that heavy media multitaskers performed worse on a test of task-switching ability, likely due to reduced ability to filter out interference from the irrelevant task set. These results demonstrate that media multitasking, a rapidly growing societal trend, is associated with a distinct approach to fundamental information processing.
Keywords: attention, cognition, executive function, multitasking, working memory
When it comes to potential mates, women may be as complicated as men claim they are, according to psychologists.
“We have found that women evaluate facial attractiveness on two levels — a sexual level, based on specific facial features like the jawbone, cheekbone and lips, and a nonsexual level based on overall aesthetics,” said Robert G. Franklin, graduate student in psychology working with Reginald Adams, assistant professor of psychology and neurology, Penn State. “At the most basic sexual level, attractiveness represents a quality that should increase reproductive potential, like fertility or health.”
On the nonsexual side, attractiveness can be perceived on the whole, where brains judge beauty based on the sum of the parts they see.
“But up until now, this (dual-process) concept had not been tested,” Franklin explained. The researchers report the findings of their tests in the current issue of the Journal of Experimental Social Psychology.
To study how women use these methods of determining facial attractiveness, the psychologists showed fifty heterosexual female college students a variety of male and female faces. They asked the participants to rate what they saw as both hypothetical dates and hypothetical lab partners on a scale of one to seven. The first question was designed to invoke a sexual basis of determining attractiveness, while the second was geared to an aesthetic one. This part of the experiment served as a baseline for the next phase.
The psychologists then presented the same faces to another set of fifty heterosexual female students. Some of these faces, however, were split horizontally, with the upper and lower halves shifted in opposite directions. The scientists asked these participants to rate the overall attractiveness of the split and whole faces on the same scale.
Split-face photo used in the evaluation of how women determine facial attractiveness, by Robert G. Franklin, graduate student in psychology, and Reginald Adams, assistant professor of psychology and neurology, Penn State. (Credit: Robert G. Franklin, Penn State)
By dividing the faces in half and disrupting the test subjects’ total facial processing, the researchers believed that women would rely more on specific facial features to determine attractiveness. They thought that this sexual route would come into play particularly when the participants saw faces that were suited as hypothetical dates rather than lab partners. The study showed exactly that.
“The whole-face ratings of the second group correlated better with the nonsexual ‘lab partner’ ratings of the first group,” Franklin said. With the faces intact, the participants could evaluate them on an overall, nonsexual level.
“The split face ratings of the second group also correlated with the nonsexual ratings of the first group when the participants were looking at female faces,” he added. “The only change occurred when we showed the second group split, male faces. These ratings correlated better with the ‘hypothetical date’ ratings of the first group.”
The bottom line is that, at a statistically significant level, splitting the faces in half made the women rely on a purely sexual strategy of processing male faces. The study verifies that these two ways of assessing facial appeal exist and can be separated for women.
“We do not know whether attractiveness is a cultural effect or just how our brains process this information,” Franklin admitted. “In the future, we plan to study how cultural differences in our participants play a role in how they rate these faces. We also want to see how hormonal changes women experience at different stages in the menstrual cycle affect how they evaluate attractiveness on these two levels.”
Researchers have long known that women’s biological routes of sexual attraction derive from an instinctive reproductive desire, relying on estrogen and related hormones to regulate them. The overall aesthetic approach is a less reward-based function, driven by progesterone.
How this complex network of hormones interacts and is channeled through the conscious brain and the human culture that shapes it is a mystery.
“It is a complicated picture,” Franklin added. “We are trying to find what features in the brain are at play, here.”
Journal of Experimental Social Psychology Volume 45, Issue 5, September 2009 Pages 1156-1159
A dual-process account of female facial attractiveness preferences: Sexual and nonsexual routes
Robert G. Franklin Jr., Reginald B. Adams Jr.
Abstract The current study conceptualizes facial attractiveness as a dual-process judgment, combining sexual and aesthetic value. We hypothesized that holistic face processing is more integral to perceiving aesthetic preference and feature-based processing is more integral to sexual preference. In order to manipulate holistic versus feature-based processing, we used a variation of the composite face paradigm. Previous work indicates that slightly shifting the top from the bottom half of a face disrupts holistic processing and enhances feature-based processing. In the present study, while nonsexual judgments best explained facial attraction in whole-face images, a reversal occurred for split-face images such that sexual judgments best explained facial attraction, but only for mate-relevant faces (i.e., other-sex). These findings indicate that disrupting holistic processing can decouple sexual from nonsexual judgments of facial attraction, thereby establishing the presence of a dual-process.
Bats, birds, box turtles, humans and many other animals share at least one thing in common: They sleep. Humans, in fact, spend roughly one-third of their lives asleep, but sleep researchers still don’t know why.
According to the journal Science, the function of sleep is one of the 125 greatest unsolved mysteries in science. Theories range from brain “maintenance” — including memory consolidation and pruning — to reversing damage from oxidative stress suffered while awake, to promoting longevity. None of these theories are well established, and many are mutually exclusive.
Now, a new analysis by Jerome Siegel, UCLA professor of psychiatry and director of the Center for Sleep Research at the Semel Institute for Neuroscience and Human Behavior at UCLA and the Sepulveda Veterans Affairs Medical Center, has concluded that sleep’s primary function is to increase animals’ efficiency and minimize their risk by regulating the duration and timing of their behavior.
“Sleep has normally been viewed as something negative for survival because sleeping animals may be vulnerable to predation and they can’t perform the behaviors that ensure survival,” Siegel said. These behaviors include eating, procreating, caring for family members, monitoring the environment for danger and scouting for prey.
“So it’s been thought that sleep must serve some as-yet unidentified physiological or neural function that can’t be accomplished when animals are awake,” he said.
Siegel’s lab conducted a new survey of the sleep times of a broad range of animals, examining everything from the platypus and the walrus to the echidna, a small, burrowing, egg-laying mammal covered in spines. The researchers concluded that sleep itself is highly adaptive, much like the inactive states seen in a wide range of species, starting with plants and simple microorganisms; these species have dormant states — as opposed to sleep — even though in many cases they do not have nervous systems. That challenges the idea that sleep is for the brain, said Siegel.
“We see sleep as lying on a continuum that ranges from these dormant states like torpor and hibernation, on to periods of continuous activity without any sleep, such as during migration, where birds can fly for days on end without stopping,” he said.
Hibernation is one example of an activity that regulates behavior for survival. A small animal, Siegel noted, can’t migrate to a warmer climate in winter. So it hibernates, effectively cutting its energy consumption and thus its need for food, remaining secure from predators by burrowing underground.
Sleep duration, then, is determined in each species by the time requirements of eating, the cost-benefit relations between activity and risk, migration needs, care of young, and other factors. However, unlike hibernation and torpor, Siegel said, sleep is rapidly reversible — that is, animals can wake up quickly, a unique mammalian adaptation that allows for a relatively quick response to sensory signals.
Humans fit into this analysis as well. What is most remarkable about sleep, according to Siegel, is not the unresponsiveness or vulnerability it creates but rather the ability to reduce body and brain metabolism while still maintaining a high level of responsiveness to the environment.
“The often cited example is that of a parent arousing at a baby’s whimper but sleeping through a thunderstorm,” he said. “That dramatizes the ability of the sleeping human brain to continuously process sensory signals and trigger complete awakening to significant stimuli within a few hundred milliseconds.”
In humans, the brain constitutes, on average, just 2 percent of total body weight but consumes 20 percent of the energy used during quiet waking, so these savings have considerable adaptive significance. Besides conserving energy, sleep invokes survival benefits for humans too — “for example,” said Siegel, “a reduced risk of injury, reduced resource consumption and, from an evolutionary standpoint, reduced risk of detection by predators.”
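The 2-percent/20-percent figures imply just how energy-hungry brain tissue is relative to the rest of the body. A rough calculation; the per-unit-mass comparison below is an extrapolation for illustration, not a figure from the article:

```python
# Figures from the article: the brain is ~2% of body mass but consumes
# ~20% of the energy used during quiet waking.
brain_mass_frac = 0.02
brain_energy_frac = 0.20

# Energy use per unit mass, relative to the whole-body average (= 1.0)
brain_intensity = brain_energy_frac / brain_mass_frac              # 10.0
rest_intensity = (1 - brain_energy_frac) / (1 - brain_mass_frac)   # ~0.82

print(f"brain tissue: {brain_intensity:.0f}x the body-average energy use per unit mass")
print(f"non-brain tissue: {rest_intensity:.2f}x")
```

On these numbers, a gram of brain burns energy at roughly ten times the body-wide average rate, which is why even a modest metabolic reduction during sleep yields meaningful savings.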
“This Darwinian perspective can explain age-related changes in human sleep patterns as well,” he said. “We sleep more deeply when we are young, because we have a high metabolic rate that is greatly reduced during sleep, but also because there are people to protect us. Our sleep patterns change when we are older, though, because that metabolic rate reduces and we are now the ones doing the alerting and protecting from dangers.”
Nature Reviews Neuroscience, advance online publication, published online 5 August 2009; DOI: 10.1038/nrn2697. Corrected online 6 August 2009.
Article series: Sleep
Opinion: Sleep viewed as a state of adaptive inactivity
Jerome M. Siegel
Department of Psychiatry, School of Medicine, University of California, Los Angeles, California 90095, USA and at Neurobiology Research (151-A3), Veterans Affairs Greater Los Angeles Healthcare System, North Hills, California 91343, USA.
Correspondence to: Jerome M. Siegel, email: JSiegel@ucla.edu
Abstract Sleep is often viewed as a vulnerable state that is incompatible with behaviours that nourish and propagate species. This has led to the hypothesis that sleep has survived because it fulfills some universal, but as yet unknown, vital function. I propose that sleep is best understood as a variant of dormant states seen throughout the plant and animal kingdoms and that it is itself highly adaptive because it optimizes the timing and duration of behaviour. Current evidence indicates that ecological variables are the main determinants of sleep duration and intensity across species.
Personality traits associated with chronic worrying can lead to earlier death, at least in part because these people are more likely to engage in unhealthy behaviors, such as smoking, according to research from Purdue University.
“Research shows that higher levels of neuroticism can lead to earlier mortality, and we wanted to know why,” said Daniel K. Mroczek, (pronounced Mro-ZAK) a professor of child development and family studies. “We found that having worrying tendencies or being the kind of person who stresses easily is likely to lead to bad behaviors like smoking and, therefore, raise the mortality rate.
“This work is a reminder that high levels of some personality traits can be hazardous to one’s physical health.”
Chronic worrying, anxiety and being prone to depression are key aspects of the personality trait of neuroticism. In this study, the researchers looked at how smoking and heavy drinking are associated with the trait. A person with high neuroticism is likely to experience anxiety or depression and may self-medicate with tobacco, alcohol or drugs as a coping mechanism.
They found that smoking accounted for roughly 25 to 40 percent of the association between high neuroticism and mortality. The remainder is unexplained but may be attributable to biological factors or to other environmental stresses that neurotic individuals experience, Mroczek said.
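The “accounted for” figure in analyses like this one is typically computed as the attenuation of the neuroticism coefficient in a Cox proportional hazards model once smoking is added. A minimal sketch of that arithmetic, using hypothetical coefficients chosen only to reproduce the 40-percent value reported in the paper’s abstract:

```python
# Hypothetical log-hazard (Cox) coefficients for neuroticism:
beta_unadjusted = 0.25   # smoking NOT in the model (illustrative value)
beta_adjusted = 0.15     # smoking added to the model (illustrative value)

# Attenuation: the fraction of the neuroticism effect "absorbed" by smoking
attenuation = (beta_unadjusted - beta_adjusted) / beta_unadjusted

print(f"attenuation: {attenuation:.0%}")  # 40%
```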
The researchers analyzed data of 1,788 men and their smoking behavior and personality traits over a 30-year period from 1975 to 2005. The data was part of the VA Normative Aging Study, which is a long-term study of aging men based at the Boston VA Outpatient Clinic.
Mroczek and his co-authors, Avron Spiro III and Nicholas A. Turiano, published their findings in this month’s Journal of Research in Personality.
A better understanding of the bridge between personality traits and physical health can perhaps help clinicians improve intervention and prevention programs, Mroczek said.
“For example, programs that target people high in neuroticism may get bigger bang for the buck than more widespread outreach efforts,” he said. “It also may be possible to use personality traits to identify people who, because of their predispositions, are at risk for engaging in poor health behaviors such as smoking or excessive drinking.”
Journal of Research in Personality Volume 43, Issue 4, August 2009, Pages 653-659
Do health behaviors explain the effect of neuroticism on mortality? Longitudinal findings from the VA Normative Aging Study
Daniel K. Mroczek a (corresponding author), Avron Spiro III b,c, and Nicholas A. Turiano a
a Purdue University, Department of Child Development and Family Studies, 1200 W. State St., West Lafayette, IN 47907, United States
b Normative Aging Study, VA Boston Healthcare System, Boston, MA, United States
c Boston University School of Public Health, Boston, MA, United States
Available online 7 April 2009.
Studies have shown that higher levels of neuroticism are associated with greater risk of mortality. Yet what accounts for this association? One major theoretical position holds that persons higher in neuroticism engage in poorer health behaviors, such as smoking and excessive drinking, thus leading to earlier death. We tested this hypothesis using 30-year mortality in 1788 men from the VA Normative Aging Study. Using proportional hazards (Cox) models we found that one health behavior, smoking, attenuated the effect of neuroticism on mortality by 40%. However, 60% remained unexplained, suggesting that the effects of other pathways (e.g., biological) also influence the relationship between neuroticism and mortality.
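The 40 percent figure reflects a standard mediation measure: the relative drop in the neuroticism coefficient (the log hazard ratio) in the Cox model once smoking is added as a covariate. A minimal sketch of that arithmetic, using hypothetical hazard ratios rather than the paper's actual estimates:

```python
import math

def percent_attenuation(hr_unadjusted, hr_adjusted):
    """Percent of a predictor's log-hazard coefficient 'explained' by a
    covariate: the relative drop in the Cox coefficient (log hazard ratio)
    after the covariate is added to the model."""
    b_unadjusted = math.log(hr_unadjusted)
    b_adjusted = math.log(hr_adjusted)
    return 100.0 * (b_unadjusted - b_adjusted) / b_unadjusted

# Hypothetical numbers chosen for illustration (not from the study):
# neuroticism alone gives HR = 1.30; adjusting for smoking shrinks it
# to HR = 1.17, an attenuation of roughly 40 percent.
attenuation = percent_attenuation(1.30, 1.17)
```

With these illustrative inputs the attenuation comes out near 40 percent, matching the magnitude reported in the abstract; the actual coefficients would come from fitting proportional hazards models with and without the smoking covariate.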
Keywords: Neuroticism; Personality and health; Personality and mortality; Health behaviors; Smoking
With nothing to guide their way, people attempting to walk a straight course through unfamiliar territory really do end up walking in circles, according to a report published online on August 20th in Current Biology, a Cell Press publication. Although that belief has pervaded popular culture, there has been no scientific evidence to back it up until now, according to the researchers.
“The stories about people who end up walking in circles when lost are actually true,” said Jan Souman of the Max Planck Institute for Biological Cybernetics in Germany. “People cannot walk in a straight line if they do not have absolute references, such as a tower or a mountain in the distance or the sun or moon, and often end up walking in circles.”
Those circular paths are rarely systematic, the researchers show. The same person may sometimes veer to the left and other times to the right before ending up back where they started, Souman said. That rules out one potential explanation for the phenomenon: that circle-walking stems from some systematic bias to turn in one direction, such as differences in leg length or strength. Instead, the circles seem to emerge naturally through “random drift” in where an individual thinks straight ahead to be, Souman said.
The researchers tested the idea in both forest and desert environments. Participants were instructed to walk as straight as they could in one direction, and their trajectory was recorded via GPS. Six people walked for several hours in a large, flat forest—four on a cloudy day with the sun hidden. Those four all walked in circles, with three of them repeatedly crossing their own paths without noticing it. In contrast, when the sun was out, two other participants followed an almost perfectly straight course, except during the first 15 minutes, when the sun was still hidden behind some clouds.
Three other participants walked for several hours in the Sahara desert, in southern Tunisia. Two of them, who walked during the heat of the day, veered from the course they were instructed to follow but did not walk in circles. The third walked at night, at first by the light of a full moon. Only after the moon disappeared behind the clouds did he make several sharp turns, bringing him back in the direction he started from.
In other tests, blindfolded people walked in surprisingly small circles, though rarely showing a tendency to travel in any particular direction. That result led the researchers to suggest that the inability to stick to a straight course results from accumulating “noise” in the sensorimotor system. Without an external directional reference to recalibrate the subjective sense of straight ahead, that “noise” may cause people to walk in circles, the researchers said.
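The “accumulating noise” account can be illustrated with a toy simulation (an assumption for illustration, not the researchers' model): if a walker's subjective heading drifts by a small random amount each step and nothing recalibrates it, the heading itself performs a random walk, and the trajectory curls into loops near the starting point instead of progressing in a straight line.

```python
import math
import random

def walk(n_steps, heading_noise_sd, step_len=1.0, seed=0):
    """Simulate a walker whose subjective 'straight ahead' accumulates
    Gaussian noise each step, with no external reference to correct it."""
    rng = random.Random(seed)
    x = y = heading = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        heading += rng.gauss(0.0, heading_noise_sd)  # drift accumulates
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        path.append((x, y))
    return path

def max_distance_from_start(path):
    return max(math.hypot(x, y) for x, y in path)

# With zero noise the walker covers the full 1000 m in a straight line;
# with accumulating heading noise the same number of steps stays far
# closer to the start, the path winding back on itself.
straight = max_distance_from_start(walk(1000, 0.0))
noisy = max_distance_from_start(walk(1000, 0.2))
```

The noise level here (0.2 radians per step) is an arbitrary choice for demonstration; the qualitative point is that any nonzero accumulating heading noise eventually turns a straight course into loops, which is consistent with the blindfolded walkers' small circles.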
Souman’s group plans to study this tendency under more controlled conditions by asking subjects to walk through a virtual-reality forest on a special treadmill they have built, which allows a person to travel in any direction they choose. These tests will make it possible for the researchers to isolate the various factors that might play a role, such as the availability of the sun or other landmarks, and to study their contributions to walking straight—or in circles.
Current Biology, 20 August 2009 doi:10.1016/j.cub.2009.07.053
Walking Straight into Circles
Jan L. Souman 1 (corresponding author), Ilja Frissen 1,2, Manish N. Sreenivasa 1,3, and Marc O. Ernst 1
1 Multisensory Perception and Action Group, Max Planck Institute for Biological Cybernetics, Spemannstraße 41, 72076 Tübingen, Germany
2 Multimodal Interaction Lab, McGill University, 3459 Rue McTavish, Montreal, QC H3A 1Y1, Canada
3 Laboratoire d’Analyse et d’Architecture des Systèmes, Centre National de la Recherche Scientifique, 7 Avenue du Colonel Roche, 31077 Toulouse, France
Common belief has it that people who get lost in unfamiliar terrain often end up walking in circles. Although uncorroborated by empirical data, this belief has widely permeated popular culture. Here, we tested the ability of humans to walk on a straight course through unfamiliar terrain in two different environments: a large forest area and the Sahara desert. Walking trajectories of several hours were captured via global positioning system, showing that participants repeatedly walked in circles when they could not see the sun. Conversely, when the sun was visible, participants sometimes veered from a straight course but did not walk in circles. We tested various explanations for this walking behavior by assessing the ability of people to maintain a fixed course while blindfolded. Under these conditions, participants walked in often surprisingly small circles (diameter < 20 m), though rarely in a systematic direction. These results rule out a general explanation in terms of biomechanical asymmetries or other general biases. Instead, they suggest that veering from a straight course is the result of accumulating noise in the sensorimotor system, which, without an external directional reference to recalibrate the subjective straight ahead, may cause people to walk in circles.