The brain, it turns out, may not walk the same path when deciphering the meaning of spoken language as it does when reading written words. A team of University of California, San Francisco (UCSF) neuroscientists discovered that, without any conscious effort on our part, our brains respond to the more general acoustic features of words rather than identifying each individual sound segment, or phoneme, such as the “t” sound in “table.” The UCSF team believes this gives listeners an advantage, because articulation varies widely from speaker to speaker and may even change for a single speaker depending on time or circumstance. (Can you say drunken slur?)

"Our hope is that with this more complete knowledge of the building blocks and fundamental aspects of language, we can meaningfully think about how learning occurs," said UCSF neurosurgeon and neuroscientist Dr. Edward F. Chang, senior author of the new study. "We can maybe even explain why some of this goes awry." Someday, then, the team’s description of how the brain processes language may contribute to new therapies or teaching techniques for those with learning disabilities, such as dyslexia, or, more fancifully, help people learn a second language.

Neuroscience Plus Linguistics

Chang and his team began their study with a simple question: How exactly does the brain process speech? “The brain regions where speech is processed in the brain had been identified, but no one has really known how that processing happens,” Chang noted in a press statement. Neuroscientists have long assumed that Wernicke’s area, which is tucked within the superior temporal gyrus (STG) and known to be involved in speech perception, is the site where brain cells respond to the sounds of individual phonemes, even though that assumption somewhat contradicts linguistic theory.

Linguists, by contrast, organize the sounds of spoken words into “features,” which are broad categories of sound. For instance, they place the consonants p, t, k, b, and d in the same “plosives” grouping because, to make each of these sounds, a speaker produces a similar brief burst of air with the mouth and throat. Other sounds, such as those made by s, z, and v, form the class known as “fricatives,” because they create friction in the vocal tract. In other words, linguists organize spoken language into broad types of sounds, which are less distinct than the individual letters that compose written words.
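To make that idea concrete, the short Python sketch below is purely illustrative and is not drawn from the study itself; the class names and phoneme memberships follow standard linguistic usage, not the UCSF data, and simply show how individual phonemes collapse into the broad feature classes described above.

```python
# Illustrative only: grouping individual phonemes into broad phonetic
# feature classes, as linguists do. Class names and memberships follow
# standard linguistic descriptions, not the UCSF study's data.
FEATURE_CLASSES = {
    "plosive": {"p", "t", "k", "b", "d", "g"},      # brief burst of air
    "fricative": {"s", "z", "f", "v", "sh", "th"},  # friction in the vocal tract
    "nasal": {"m", "n", "ng"},                      # air routed through the nose
}

def feature_class(phoneme: str) -> str:
    """Return the broad feature class a phoneme belongs to, or 'unknown'."""
    for name, members in FEATURE_CLASSES.items():
        if phoneme in members:
            return name
    return "unknown"

# The study's finding, loosely put: responses in the STG pattern by these
# broad classes rather than by the individual phonemes inside them.
for p in ["t", "z", "m"]:
    print(p, "->", feature_class(p))  # t -> plosive, z -> fricative, m -> nasal
```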

Using knowledge gained from both linguistic theory and brain science, the researchers set to work designing an experiment that would help them understand how the brain creates meaning from spoken words.

Acoustic Patterns

The researchers enlisted the help of six patients with epilepsy who, due to their condition, had electrodes placed on the surface of their brains to measure seizure activity. Seizing this unique opportunity, the researchers recorded electrical activity in the participants’ STGs as they listened to 500 unique English sentences spoken by 400 different people. What exactly transpired in the listeners’ brains?

Surprisingly, the scientists discovered that the brain did not sort meaning the way one might expect: by identifying each individual sound segment and building up the meaning of a word from there. Instead, particular regions of the STG responded to general acoustic features rather than to individual phonemes, such as b or z. “When we hear someone talk, different areas in the brain ‘light up’ as we hear the stream of different speech elements,” said Dr. Nima Mesgarani, the study’s first author and an assistant professor of electrical engineering at Columbia University.

Chang explained that the STG functions in a manner similar to our visual system. Just as our brains detect general visual features, such as shape, allowing us to reliably recognize, say, a chair no matter where we stand or from what angle we view it, they also unconsciously detect general acoustic features, helping us reliably recognize individual words no matter how a given speaker articulates them. “By studying all of the speech sounds in English, we found that the brain has a systematic organization for basic sound feature units, kind of like elements in the periodic table,” Chang said. Our brains, then, are built to accommodate a wide spectrum of individual differences while deriving meaning from all that is spoken ... as well as all that is left unsaid.

Source: Chang EF, Johnson K, Cheung C, Mesgarani N. Phonetic Feature Encoding in Human Superior Temporal Gyrus. Science Express. 2014.