The Language of Learning

TIMES SCIENCE WRITER

The singsong crooning that every adult instinctively adopts for conversation with a newborn baby is more than patronizing gibberish passing between the generations.

Its melodious cadence, with distorted tones, variable pitch and drawn-out vowel sounds, is nature’s way of teaching every infant the phonetic building blocks of its native tongue, new research suggests.

So what seems like spontaneous baby talk is, in fact, more than terms of endearment designed to get an infant’s attention. It is a universal teaching mechanism rooted in the biology of language and the developing human brain, say neuroscientist Patricia Kuhl and her colleagues at the University of Washington in Seattle, who recently studied how native speakers of English, Swedish and Russian talk to infants.

Under the tutelage of such exaggerated speech, their work shows, infants at 20 weeks of age already are laying down crucial neural circuits fine-tuned to the nuances of vowel sounds common to human speech. And by 6 months of age--long before even the most precocious child knows a participle from a pacifier--a baby’s brain already responds to sounds differently depending on its mother tongue.

Linguists call the special tone adults reserve for speech with infants “parentese,” and the new research indicates it is the same in every culture around the world.

Other researchers have shown that babies pay more attention to parentese than to normal speech, even when it comes from a stranger. But until now, no one was sure why the infants found it so compelling.

“Parentese is a fairly unique signal, with a distinct acoustic signature in men and women, in old and young,” Kuhl said. “Our new study says it is not just a melody but it contains a tutorial on language.

“Six-monthers don’t know a single word,” she said. “They are cooing and smiling, but they also are listening to us--the caretakers--and their brains are mapping the information they hear. There is no outward sign the brain is processing the minute building blocks of language.”

The research, published recently in Science, is the latest finding in a series of studies assessing the impact of spoken language on the developing brain. The spoken word appears to have a remarkable effect on the budding neural circuits of a baby’s brain, many neuroscientists say.

Indeed, some believe the most important predictor of later intelligence, social skill and school performance is the number of different words a baby hears each day.

“Language development is a microcosm of what we are learning about the baby brain and the developing mind,” Kuhl said. It is the marriage of nature and nurture, she said: “Biology and culture don’t compete. They cooperate.”

To examine the role of parentese, Kuhl and her colleagues at the Early Intervention Institute in St. Petersburg, Russia, and Stockholm University in Sweden investigated differences in how American, Swedish and Russian mothers speak to their infants and to other adults.

The three languages were chosen because each has a significantly different number of vowel sounds. Russian has five vowel sounds, English nine and Swedish 16.

The researchers recorded 10 women from each of the three countries talking for 20 minutes to their babies, who ranged in age from 2 to 5 months. Then they recorded the same women talking to other adults.

The mothers were told to talk naturally, but were given a list of words containing three common vowel sounds and instructed to work them into their conversations.

For those speaking English, the target words were “bead” for its “ee” vowel sound, “pot” for its “ah” sound, and “boot” for its “oo” sound. Similar words were chosen from Russian and Swedish.

The researchers then used a spectrograph to analyze more than 2,300 instances of how the target words were used in the conversations and discovered that, in all three language groups, the speech directed at infants was stretched out to emphasize the vowel sounds, in contrast to the more normal tone used with adults.

They found that the mothers did not simply distort all the sounds and words they made--as they might have if they were just mimicking the way a child talks--but only those involving the key vowel sounds.

The researchers concluded that the exaggerated speech allowed the mothers to expand the sounds of the vowels so they would be more distinct from each other. It also appears to allow the mothers to produce a greater variety of vowel pronunciations without overlapping other vowel sounds.

“There is more information in the speech stream than we thought was there,” said Jean E. Andruski, who helped conduct the University of Washington research. She is a linguist who specializes in computerized speech synthesis at Eloquent Technology in Ithaca, N.Y.

Although the clues to vowels in parentese may hardly seem enough to point an infant toward the rudiments of anything as complex as human language, the researchers said the capacity to distinguish sounds was an essential beginning. And the imprinting was taking place at a time when the developing human brain is especially receptive to the world around it.

“They get some very important starts in speech perception in this garbled stream,” Andruski said. “The sounds you use in language are not infinitely variable. You have to figure out how your language sorts them out. What could be one sound in one language could be two sounds in another language.

“This [research] says they can do that at an amazingly young age,” she said. “Right from birth--if not before--they are hearing speech in a way that provides better separation of sound categories than we thought.”

Advertisement