LANGUAGE, BRAIN, AND COGNITIVE DEVELOPMENT MEETING:
What Makes the Mind Dance and Count
Michael Balter

PARIS--Four dozen leading cognitive neuroscientists met here at the Collège de France from 3 to 5 May to share their latest data on topics such as amusia--an inability to perceive music--and number sense in infants. The "Language, Brain, and Cognitive Development" gathering was held in honor of Jacques Mehler, founder of the journal Cognition, who is soon to retire.

Wired for Sound, Not Music
Che Guevara was widely recognized as a man of many talents. Yet one talent the 1960s revolutionary lacked was the ability to hear music, a shortcoming he was acutely aware of. According to one account, Guevara was at a party one evening when he spotted a nurse he wanted to dance with. He asked a friend to give him a nudge when the orchestra struck up a tango. But the friend got the signal mixed up, sending Guevara out on the dance floor to dip and swirl his partner absurdly to the tune of a soft Brazilian samba.
Guevara suffered from congenital amusia, a nearly total tone deafness that turns music into mere noise. Although 5% or more of some populations suffer from this syndrome, it has not been widely studied. At the meeting, Isabelle Peretz of the University of Montreal reported preliminary results with amusical subjects that may support the hypothesis that the brain contains specific neural pathways for music.
Peretz studied 11 amusical adults who were highly educated, had no hearing loss or other obvious neurological impairments, and had tried to take music lessons as children and thus had been exposed to music from an early age. These individuals, along with 67 control subjects, were given a battery of tests for musical ability and other cognitive skills, such as language ability. Most members of the amusical cohort were unable to detect when a tune, such as "Happy Birthday," was played with pitch alterations that made it clunk in the ears of the control subjects. One typical case, Peretz says, was that of "Monica," whose IQ measured 111 and who was working on a master's in health sciences. Despite functioning normally in all other areas of mental life, Monica could neither recognize nor sing familiar tunes such as "Frère Jacques," even though as a native of French-speaking Quebec she had been exposed to the song since infancy. The only nonmusical impairment that Peretz and her Montreal co-workers were able to identify was a decreased ability of some subjects to detect prosody, or pitch variations, in normal speech.
Thus, aside from impairing their singing and dancing skills, amusia may have seeped into some subjects' language abilities as well. Peter Jusczyk of Johns Hopkins University in Baltimore and others caution that this prosody deficit might complicate a neat picture by indicating a neural linkage between music and language pathways. "I would like to see better evidence that amusia can be fully disentangled from prosody in language," says Jusczyk. "Prosody, after all, refers to the musical aspect of language."
Despite the spillover between pure music comprehension and sensitivity to the subtle music within speech, Peretz concludes that her results and those from other studies are consistent with the idea that "there must be specialized neural systems for music," which amusical people lack from birth. Indeed, the notion that musical ability is hardwired into the brain has recently received support from studies of identical twins (Science, 9 March, pp. 1879 and 1969). Also bolstering this conclusion, Peretz says, are studies of brain-damaged patients who have lost their musical abilities, as well as studies of people with "musicogenic epilepsy," a rare condition in which seizures are triggered by music. Peretz says her team will now turn to techniques such as magnetic resonance imaging to try to pin down exactly where in the brain these neural circuits are located.
Peretz's study is receiving high marks. "There is a very strong case for specific neural pathways," says Uta Frith of University College London. But the findings raise the question of what adaptive purpose such hardwiring might serve. In several recent articles, Peretz has argued that the ability to hear music is an adaptation possibly designed to increase social cohesion among groups by providing them something to share. But some of her colleagues are skeptical. "She has provided compelling evidence that there are [neural] pathways for music," says Steven Pinker of the Massachusetts Institute of Technology in Cambridge. "But whether they were selected in the course of human evolution as opposed to being a byproduct of ... pathways that evolved for other purposes is still an open question."
Born to Enumerate?
Albert Einstein, describing how he arrived at such highly mathematical concepts as the theory of relativity, once wrote: "Words and language ... do not seem to play any part in my thought processes." For some cognitive scientists, such perceptions support the notion that our brains are equipped with a built-in "number sense," independent of language or other symbolic functions. At the meeting, Stanislas Dehaene of the French Atomic Energy Commission's neuroimaging lab in the Paris suburb of Orsay presented new evidence for this hypothesis from studies of infants.
Dehaene has long argued that our ability to perform calculations is rooted in two distinct brain regions. Exact arithmetic, he claims, is a cultural invention requiring number symbols--such as 1, 2, 3--and these calculations are carried out in left-hemisphere circuits also used for language. But approximate arithmetic, corresponding to a general number sense that has evolved in humans and some animals, is independent of language and can be mapped to parietal lobe circuits (Science, 7 May 1999, pp. 928 and 970). Dehaene has found support for this arithmetic duality in research, by his own group and others, demonstrating that babies, monkeys, and even rodents can distinguish numbers. Additional evidence comes from brain-damaged patients who have lost their ability to do arithmetic.
The new studies, which Dehaene carried out in collaboration with Ghislaine Dehaene-Lambertz of the French national research agency CNRS in Paris, investigated alterations in electrical activity in the brains of 4-month-old babies exposed to changes in number patterns. The babies' heads were covered with a light mesh made up of 64 electrodes. In the first stage of the experiments, the researchers presented numbers to the babies as tones, flashes of light, or spoken syllables. For example, the number 2 could be represented by two tones in quick succession. The electrodes recorded the resulting event-related potentials (ERPs) in the babies' brains.
In previous work on the ability of babies to distinguish spoken syllables, the pair had found that one ERP peak increased significantly whenever a novel syllable was heard. In the new studies of number sense, they found a similar effect. A peak that arose about 750 milliseconds after the stimulus decreased in intensity if the baby was repeatedly exposed, or habituated, to the same number--for example, the tones beep-beep, beep-beep, beep-beep. But if the last in the series of numbers was changed--such as beep-beep, beep-beep, beep-beep-beep--this peak shot back up to its prehabituation level. The effect was independent of the stimuli (tones, flashes, or spoken sounds), even if the stimuli were mixed in the same experiment--indicating, Dehaene said, that the babies were responding directly to changes in number.
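To make the habituation logic concrete, here is a minimal toy model in Python of the response pattern described above. It is an illustration only, not Dehaene's analysis: the baseline, floor, and decay values are hypothetical numbers chosen for the sketch.

```python
# Toy model of habituation/dishabituation of an ERP peak.
# All parameter values below are hypothetical, for illustration only.

def peak_amplitude(stimulus_numbers, baseline=10.0, floor=4.0, decay=0.6):
    """Return a predicted peak amplitude for each stimulus in sequence.

    Repetition of the same number drives the response toward a floor
    (habituation); a change in number restores the baseline
    (dishabituation), mimicking the pattern reported in the study.
    """
    amplitudes = []
    current = baseline
    previous = None
    for n in stimulus_numbers:
        if n != previous:          # novel number: response recovers
            current = baseline
        amplitudes.append(current)
        # habituate toward the floor before the next repetition
        current = floor + (current - floor) * decay
        previous = n
    return amplitudes

# beep-beep repeated three times, then a deviant beep-beep-beep
print(peak_amplitude([2, 2, 2, 3]))   # [10.0, 7.6, 6.16, 10.0]
```

The point of the sketch is the shape of the response, not the numbers: amplitude falls with repetition and snaps back when the quantity changes, regardless of which modality carries the stimulus.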
Dehaene and others at the meeting interpret these results as further support for the idea that humans possess an intrinsic number sense long before they can speak or perform calculations. "These studies are wonderful," says Elizabeth Spelke of the Massachusetts Institute of Technology. "They fit in beautifully with the ensemble of evidence ... that there is a [brain] domain-specific, dedicated system for processing approximate [numbers]."
Such findings may help provide clues to the evolutionary origin of number sense. Dehaene's study "parallels very nicely the work on animals," says Marc Hauser of Harvard University, who presented similar results at the meeting from experiments on cotton-top tamarin monkeys. And Spelke praises the use of ERP measurements as a step forward in the study of how cognitive processes in babies develop, work that in the past has relied heavily on behavioral indicators such as how long an infant spends gazing at a stimulus: "This is better data than from virtually any of our behavioral methods to study infants."

Science, Volume 292, Number 5522, 1 June 2001, pp. 1636-1637.

Music of the Hemispheres
By James Shreeve

Why can a toddler sing? Why is even the most ordinary human brain a library of melodies?

To look at her, you would never know that Isabelle X is missing a piece of her brain. Ten years ago, a swollen blood vessel burst in her left temporal lobe. When the surgeon opened her skull to excise the damaged tissue, he noticed another dangerously swollen vessel on the right side and prudently snipped that one out too. The operation saved her life, but at the price of a good portion of cerebral cortex. Now she sits in front of a video camera: a poised, attractive woman in her late thirties, wearing a stylish beige jacket over a black chemise. She doesn’t slur her words or stare vacantly. No muscular tic or twitch haunts her perfectly made-up face. What is most astonishing about Isabelle, in fact, is how utterly normal she is. At least until the music starts.

O Tannenbaum, O Tannenbaum, how lovely are your branches!

Plucked out on a piano offscreen, without lyrics, the old Christmas chestnut is instantly recognizable--or should be. When an investigator asks Isabelle to name the tune, she hesitates.

“A children’s song?” she answers.

“Try this one,” says the investigator.

Twinkle twinkle little star, how I wonder what you are. . . .

“I don’t think I know that one,” says Isabelle, a little sheepishly.

The investigator--psychologist Isabelle Peretz of the University of Montreal--asks her to name one more. The piano plays what must surely be North America’s most familiar ditty: Happy birthday to you, happy birthday to you!

Isabelle listens, then shakes her head.

“No,” she replies. “I don’t know it.”

Before her operation, Isabelle knew the song only too well; as the manager of a local restaurant, she was obliged to sing it to celebrating diners almost every night. While not a musician herself, Isabelle certainly has some musical background, and her brother is a well-known jazz band conductor. There is nothing wrong with her hearing per se: in other experiments, she easily recognizes people’s voices and has no trouble naming a tune when just a few snatches of its lyrics are read to her. Like other patients suffering from the clinical condition known as amusia, she can easily identify environmental sounds--a chicken clucking, a cock crowing, a baby crying. But no melody in the world--not even “Happy Birthday”--triggers so much as a wisp of recognition.

“This is the most serious case of amusia I have ever seen,” says Peretz.

That Isabelle cannot recognize music may be peculiar, but from a broader view, what is truly, profoundly odd is that the rest of us can.

“Every child will listen to the Barney song and sing it back again without prompting,” says Robert Zatorre, a neuropsychologist at the Montreal Neurological Institute at McGill University. “This is very different from an activity like reading, where exposure alone won’t do anything, no matter how long you sit in front of a book.”

Such talent, however, may not be too far removed from the abilities that enable an infant to learn to speak. Language and music are both forms of communication that rely on highly organized variations in sound pitches, stress, and rhythm. Both are rich in “harmonics”: the overtones above the primary frequency of a sound that give it resonance and purity. In language, sounds are combined into patterns--words--that refer to something other than themselves. This makes it possible for us to communicate complexities of information and meaning far beyond the capabilities of other species. But notes, chords, and melodies lack explicit meanings. So why does music exist? Is our appreciation of it a biological universal, or a cultural creation? Why does it have such power to stir our emotions? Does music serve some adaptive purpose, or is it nothing more than an exquisitely pointless epiphenomenon--like a talent for chess, or the ability to taste the overtones of plum or vanilla in a vintage wine?

“In Western society we’re inclined to think of music as something extra,” says Sandra Trehub, a developmental psychologist at the University of Toronto. “But you can’t find a culture that doesn’t have music. Everybody is listening.”

What they are listening to is nothing more than organized sound. In the sixth century B.C., the Greek philosopher Pythagoras observed that music pleasing to the ear was produced by plucking lengths of string that bore simple mathematical relationships to one another. The physical basis for this phenomenon, it was later discovered, lies in the frequencies of the sound waves that make up notes. For example, when the frequency of one note is twice that of a second, the two notes will sound like the same note, an octave apart. This principle of “octave equivalence” is present in all the world’s music systems; the notes that make up the scale within an octave do not always correspond to the familiar do re mi of Western music, but they all come back, so to speak, to do.
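For readers who like the arithmetic spelled out, here is a minimal sketch of octave equivalence, assuming the standard 440 Hz concert A and the 12-semitone equal-tempered octave of modern Western tuning (both conventions, used here only to illustrate the ratio):

```python
# Octave equivalence in numbers: doubling a note's frequency yields
# the "same" note an octave higher. The 440 Hz reference and the
# 12-tone equal-tempered formula are standard Western conventions.

A4 = 440.0                       # concert A, in Hz

def semitones_up(freq, n):
    """Frequency n equal-tempered semitones above freq."""
    return freq * 2 ** (n / 12)

A5 = semitones_up(A4, 12)        # one octave = 12 semitones
print(A5, A5 / A4)               # 880.0 Hz, ratio exactly 2.0
```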

Other ear-pleasing intervals are also built on notes whose frequencies relate in simple ways. Anyone who plays a little guitar has experienced the supremacy of these “perfect consonances” in Western music today; whole anthologies of folk songs, blues, rock, and other popular music can be accompanied quite adequately by simply strumming chords that are built on the first, fourth, and fifth tones in a scale--say, C, F, and G. In fact, when the oldest known popular song--written down on a Sumerian clay tablet some 3,400 years ago--was exhumed and performed in 1974, the audience found, to its pleasure, that it sounded utterly familiar because its intervals were much like those found in the seven-tone scale of Western music. Many scales in the world’s major non-Western musical systems are also founded on octaves, fifths, and, to a lesser extent, fourths. One can’t help wondering if our partiality to these simple frequency ratios is based in our biology or if it reflects learned cultural preferences that just happen to be ancient and ubiquitous.
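A short sketch makes the “simple ratios” claim concrete, comparing the just-intonation fourth (4:3) and fifth (3:2) with their modern equal-tempered counterparts; the note names C, F, and G follow the example above:

```python
# Just-intonation ratios for the perfect consonances, compared with
# their 12-tone equal-tempered approximations. The near-match shows
# why chords on the first, fourth, and fifth scale degrees rest on
# such simple frequency relationships.

just_ratios = {"fourth (C->F)": 4 / 3,
               "fifth (C->G)":  3 / 2,
               "octave":        2 / 1}

tempered = {"fourth (C->F)": 2 ** (5 / 12),   # 5 semitones
            "fifth (C->G)":  2 ** (7 / 12),   # 7 semitones
            "octave":        2.0}             # 12 semitones

for name in just_ratios:
    print(f"{name}: just {just_ratios[name]:.4f}"
          f" vs tempered {tempered[name]:.4f}")
# fifth: just 1.5000 vs tempered 1.4983 -- within about 0.1%
```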

For several years Trehub has been trying to separate the natural elements of musical systems from the nurtured by using the clean, uncluttered infant mind as a filter. In one experiment, she and her colleagues played a series of repeated intervals to six-month-old babies, raising or lowering the interval occasionally to see if the infant responded to this deviation from the pattern. They found that the infants noticed the change when the test intervals were perfect fifths or fourths but not when they were composed of more complex frequency ratios--the very ones adult ears tend to regard as gratingly dissonant. This does not mean that we come into the world with “perfect-interval sensors” already in place, but at the very least, it suggests a powerful biological predisposition toward learning them is built into us from birth.

Might this predisposition be somehow linked to our innate capacity for language? The many elements shared by both music and language make such a notion appealing. But the specialization of the brain tells a different story. It has long been known that language is primarily, though not exclusively, a function of the left side of the brain. Patients with damage to a frontal region in the left hemisphere known as Broca’s area typically lose their ability to speak, while those with injuries farther back in the hemisphere, in what is called Wernicke’s area, often relinquish their ability to understand what is being said. Yet paradoxically, people who have suffered left hemisphere damage often retain the ability to sing. For that reason, neuroscientists have historically been tempted to view music too as a lateralized cognitive function, usually attributed to the right hemisphere. In light of the role of the right hemisphere in expressing and interpreting emotion, the notion seems particularly provocative. But the truth may be more complex.

Until recently, the only way to glimpse the underpinnings of music in the normal human brain was to see them ruptured, confused, or exposed in a damaged one. The Russian composer Vissarion Shebalin, for instance, suffered two left hemisphere strokes in the 1950s that left him unable to speak or understand the meaning of words; nonetheless, he continued to teach and compose music, including a symphony that Shostakovich believed to be among his most brilliant works. Shebalin’s case is a mirror image of Isabelle X’s loss of music without loss of words, and it seems to support the notion that music and language play out on separate neural circuits in the brain’s two hemispheres.