intellectual vanities… about close to everything


Sweet Music – The Basis of Consonance


Ever since ancient times, scholars have puzzled over the reasons that some musical note combinations sound so sweet while others are just downright dreadful. The Greeks believed that simple ratios in the string lengths of musical instruments were the key, maintaining that the precise mathematical relationships endowed certain chords with a special, even divine, quality. Twentieth-century composers, on the other hand, have leaned toward the notion that musical tastes are really all in what you are used to hearing.

Now, researchers think they may have gotten closer to the truth by studying the responses of more than 250 college students in Minnesota to a variety of musical and nonmusical sounds. “The question is, what makes certain combinations of musical notes pleasant or unpleasant?” asks Josh McDermott, who conducted the studies at the University of Minnesota before moving to New York University. “There have been a lot of claims. It might be one of the oldest questions in perception.”
The University of Minnesota team, including collaborators Andriana Lehr and Andrew Oxenham, was able to independently manipulate both the harmonic frequency relations of the sounds and another quality known as beating. (Harmonic frequencies are all multiples of the same fundamental frequency, McDermott explains. For example, notes at frequencies of 200, 300, and 400 hertz are all multiples of 100. Beating occurs when two sounds are close but not identical in frequency. Over time, the frequencies shift in and out of phase with each other, causing the sound to wax and wane in amplitude and producing an audible “wobbling” quality.)
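The distinction is easy to reproduce in a few lines of code. Below is a minimal Python sketch, assuming only NumPy; the frequencies are the ones from the example above, and the 4 Hz beat rate is an arbitrary illustrative choice.

    import numpy as np

    sr = 44100                       # sample rate in Hz
    t = np.arange(0, 1.0, 1.0 / sr)  # one second of audio

    # Harmonic tone: 200, 300, and 400 Hz are all multiples of a 100 Hz
    # fundamental, so the mixture sounds fused and steady.
    harmonic = sum(np.sin(2 * np.pi * f * t) for f in (200.0, 300.0, 400.0))

    # Beating: two tones close but not identical in frequency drift in and
    # out of phase, so their sum waxes and wanes in amplitude at the
    # difference frequency (204 - 200 = 4 Hz).
    beating = np.sin(2 * np.pi * 200.0 * t) + np.sin(2 * np.pi * 204.0 * t)

    # The envelope of the beating signal swings between roughly 0 and 2,
    # which listeners hear as an audible "wobble".
    print("beating signal peak:", beating.max())

Writing either signal to a WAV file or plotting its envelope makes the difference obvious: the harmonic complex is steady, while the beating pair pulses four times per second.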
The researchers’ results show that musical chords sound good or bad mostly depending on whether the notes being played produce frequencies that are harmonically related or not. Beating didn’t turn out to be as important. Surprisingly, the preference for harmonic frequencies was stronger in people with experience playing musical instruments. In other words, learning plays a role — perhaps even a primary one, McDermott argues.
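The individual-differences logic behind that conclusion can be illustrated with a toy calculation. The sketch below assumes NumPy and SciPy and uses synthetic preference scores, not the study’s data; the point is only the form of the analysis: if harmonicity preferences underlie consonance, the two scores should correlate across subjects, while beating preferences need not.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n = 250  # roughly the number of subjects in the study

    # Hypothetical per-subject preference scores: pleasantness of harmonic
    # minus inharmonic sounds, consonant minus dissonant chords, and
    # beat-free minus beating sounds.
    harmonicity = rng.normal(size=n)
    consonance = 0.7 * harmonicity + rng.normal(scale=0.7, size=n)
    beating_pref = rng.normal(size=n)  # unrelated by construction here

    for name, score in [("harmonicity", harmonicity), ("beating", beating_pref)]:
        r, p = pearsonr(score, consonance)
        print(f"{name} vs. consonance preference: r = {r:.2f}, p = {p:.3g}")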
Whether you would get the same result in people from other parts of the world remains to be seen, McDermott says, but the effect of musical experience on the results suggests you might not. “It suggests that Westerners learn to like the sound of harmonic frequencies because of their importance in Western music. Listeners with different experience might well have different preferences.” The diversity of music from other cultures is consistent with this. “Intervals and chords that are dissonant by Western standards are fairly common in some cultures,” he says. “Diversity is the rule, not the exception.”
That’s something that is increasingly easy to lose sight of as Western music has come to dominate the airwaves all across the globe. “When all the kids in Indonesia are listening to Eminem,” McDermott says, “it becomes hard to get a true sense.”
Individual Differences Reveal the Basis of Consonance

Josh H. McDermott (1), Andriana J. Lehr (2), and Andrew J. Oxenham (2)
1 Center for Neural Science, New York University, New York, NY 10003, USA
2 Department of Psychology, University of Minnesota, Minneapolis, MN 55455, USA
Received 5 March 2010; revised 7 April 2010; accepted 8 April 2010; available online 20 May 2010.
Summary: Some combinations of musical notes are consonant (pleasant), whereas others are dissonant (unpleasant), a distinction central to music. Explanations of consonance in terms of acoustics, auditory neuroscience, and enculturation have been debated for centuries. We utilized individual differences to distinguish the candidate theories. We measured preferences for musical chords as well as nonmusical sounds that isolated particular acoustic factors—specifically, the beating and the harmonic relationships between frequency components, two factors that have long been thought to potentially underlie consonance. Listeners preferred stimuli without beats and with harmonic spectra, but across more than 250 subjects, only the preference for harmonic spectra was consistently correlated with preferences for consonant over dissonant chords. Harmonicity preferences were also correlated with the number of years subjects had spent playing a musical instrument, suggesting that exposure to music amplifies preferences for harmonic frequencies because of their musical importance. Harmonic spectra are prominent features of natural sounds, and our results indicate that they also underlie the perception of consonance.
Highlights:
► Sounds with harmonic frequencies, and that lack beats, are preferred by listeners
► Only preference for harmonic spectra predicts preference for consonant chords
► Preferences for harmonic spectra, consonant chords correlate with musical experience
► Suggests harmonic frequency relations underlie perception of consonance

Written by huehueteotl

May 23, 2010 at 5:51 pm

Posted in Music, Neuroscience

Brain’s Language Areas – fMRI Investigations of Language


Language is a defining aspect of what makes us human. Although some brain regions are known to be associated with language, neuroscientists have had a surprisingly difficult time using brain imaging technology to understand exactly what these ‘language areas’ are doing. In a new study published in the Journal of Neurophysiology, MIT neuroscientists report on a new method to analyze brain imaging data — one that may paint a clearer picture of how our brain produces and understands language.

Sample brain activations of a left frontal language area in three subjects. Activations vary substantially in their precise locations, plausibly due to brain anatomy differences between subjects. Traditional group analyses would only capture a small proportion of each subject’s activations and would underestimate the functional selectivity of these regions. (Credit: Evelina Fedorenko / MIT)

Research with patients who developed specific language deficits (such as the inability to comprehend passive sentences) following brain injury suggests that different aspects of language may reside in different parts of the brain. But attempts to find these functionally specific regions of the brain with current neuroimaging technologies have been inconsistent and controversial.
One reason for this inconsistency may be that most previous studies relied on group analyses, in which brain imaging data were averaged across multiple subjects, a computation that could introduce statistical noise and bias into the analyses.
“Because brains differ in their folding patterns and in how functional areas map onto these folds, activations obtained in functional MRI studies often do not precisely ‘line up’ across brains,” explained Evelina Fedorenko, first author of the study and a postdoctoral associate in Nancy Kanwisher’s lab at the McGovern Institute for Brain Research at MIT. “Some regions of the brain thought to be involved in language are also geographically close to regions that support other cognitive processes like music, arithmetic, or general working memory. By spatially averaging brain data across subjects you may see an activation ‘blob’ that looks like it supports both language and, say, arithmetic, even in cases where in every single subject these two processes are supported by non-overlapping nearby bits of cortex.”
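A toy one-dimensional example makes the blurring concrete. The sketch below (Python with NumPy, synthetic activation maps rather than real fMRI data) gives every simulated subject a “language” patch and an adjacent, non-overlapping “arithmetic” patch, but shifts the patches from brain to brain; the overlap appears only after averaging.

    import numpy as np

    n_subjects, n_voxels = 20, 100
    rng = np.random.default_rng(2)
    lang = np.zeros((n_subjects, n_voxels))
    math_ = np.zeros((n_subjects, n_voxels))

    for s in range(n_subjects):
        start = rng.integers(30, 60)          # anatomical variability: patch position shifts
        lang[s, start:start + 5] = 1.0        # "language" patch
        math_[s, start + 5:start + 10] = 1.0  # adjacent, non-overlapping patch

    # Within any single subject the two patches never overlap...
    print("overlap in subject 0:", np.logical_and(lang[0] > 0, math_[0] > 0).sum())
    # ...but the group-averaged maps overlap over a wide stretch of "cortex".
    group_overlap = np.logical_and(lang.mean(0) > 0.1, math_.mean(0) > 0.1)
    print("overlap in the group map:", group_overlap.sum())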
The only way to get around this problem, according to Fedorenko, is to first define “regions of interest” in each individual subject and then investigate those regions by examining their responses to various new tasks. To do this, they developed a “localizer” task where subjects read either sentences or sequences of pronounceable nonwords.
Sample sentence: THE DOG CHASED THE CAT ALL DAY LONG
Sample nonword sequence: BOKER DESH HE THE DRILES LER CICE FRISTY’S
By subtracting the nonword-activated regions from the sentence-activated regions, the researchers found a number of language regions that were quickly and reliably identified in individual brains. Their new method revealed higher selectivity for sentences compared to nonwords than a traditional group analysis applied to the same data.
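Schematically, the subject-specific analysis looks like the sketch below (Python with NumPy). The arrays stand in for real contrast maps, and the threshold and voxel counts are arbitrary placeholders rather than the paper’s actual statistics: each subject’s ROI is defined from their own localizer contrast, and only then is it probed with independent data.

    import numpy as np

    rng = np.random.default_rng(1)
    n_subjects, n_voxels = 3, 1000

    for s in range(n_subjects):
        # Stand-in for this subject's localizer contrast map:
        # sentences minus nonword sequences, one value per voxel.
        contrast = rng.normal(size=n_voxels)

        # Define the subject's region of interest as the voxels responding
        # more strongly to sentences than to nonwords, above a threshold.
        roi = contrast > 2.0

        # Probe the ROI with data from a new, independent task (random
        # placeholders here) instead of averaging maps across brains.
        new_task = rng.normal(size=n_voxels)
        print(f"subject {s}: {roi.sum()} ROI voxels, "
              f"mean new-task response = {new_task[roi].mean():.2f}")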
“This new, more sensitive method allows us now to investigate questions of functional specificity between language and other cognitive functions, as well as between different aspects of language,” Fedorenko concludes. “We’re more likely to discover which patches of cortex are specialized for language and which also support other cognitive functions like music and working memory. Understanding the relationship between language and the rest of cognition is one of the key questions in cognitive neuroscience.”
Next Steps: Fedorenko published the tools used in this study on her website: http://web.mit.edu/evelina9/www/funcloc.html. The goal for the future, she argues, is to adopt a common standard for identifying language-sensitive areas so that knowledge about their functions can be accumulated across studies and across labs. “The eventual goal is of course to understand the precise nature of the computations each brain region performs,” Fedorenko says, “but that’s a tall order.”

J Neurophysiol (April 21, 2010). doi:10.1152/jn.00032.2010

Innovative Methodology

A new method for fMRI investigations of language: Defining ROIs functionally in individual subjects
Evelina Fedorenko (1,*), Po-Jang Hsieh (2), Alfonso Nieto-Castañón (3), Susan Whitfield-Gabrieli (3), and Nancy Kanwisher (4)
1, 2: Massachusetts Institute of Technology (MIT)
3, 4: Department of Brain and Cognitive Sciences, MIT

Submitted 13 January 2010; revision received 29 March 2010; accepted in final form 15 April 2010.

Abstract: Previous neuroimaging research has identified a number of brain regions sensitive to different aspects of linguistic processing, but precise functional characterization of these regions has proven challenging. We hypothesize that clearer functional specificity may emerge if candidate language-sensitive regions are identified functionally within each subject individually, a method that has revealed striking functional specificity in visual cortex but that has rarely been applied to neuroimaging studies of language. This method enables pooling of data from corresponding functional regions across subjects, rather than from corresponding locations in stereotaxic space (which may differ functionally because of the anatomical variability across subjects). However, it is far from obvious a priori that this method will work, as it requires that multiple stringent conditions be met. Specifically, candidate language-sensitive brain regions i) must be identifiable functionally within individual subjects in a short scan, ii) must be replicable within subjects and have clear correspondence across subjects, and iii) must manifest key signatures of language processing (e.g., a higher response to sentences than nonword strings, whether visual or auditory). We show here that this method does indeed work: we identify 13 candidate language-sensitive regions that meet these criteria, each present in at least 80 percent of subjects individually. The selectivity of these regions is stronger using our method than when standard group analyses are conducted on the same data, suggesting that the future application of this method may reveal clearer functional specificity than has been evident in prior neuroimaging research on language.

Key Words: fMRI • language • individual subject analyses • functional specificity

Written by huehueteotl

May 23, 2010 at 5:37 pm

Posted in Neuroscience