07. December 2015

We do have a grammar in our head

A type of "internal grammar" helps us identify meaningless sentences as grammatically correct. © Max Planck Society

New research presents evidence for innate understanding of language rules.

We all possess an internal grammar mechanism in our brain which becomes active when we process language. This decades-old theory now appears to be confirmed by a new neuroscientific study, lending fresh support to the universal grammar theory of renowned linguist Noam Chomsky.

As early as the second half of the 20th century, Noam Chomsky argued that humans are born with a predisposition to understand and learn language. Scientists at the Max Planck Institute for Empirical Aesthetics and New York University have now confirmed an aspect of this theory. Using sophisticated tests, they have shown how people are able to comprehend abstract, hierarchical structures - even if a sentence is meaningless. There seems to be some sort of mechanism in the brain that ensures that the grammatical elements of a sentence can be hierarchically structured even if its content makes no sense.

 “One of the foundational elements of Chomsky’s work is that we have a grammar in our head, which underlies our processing of language,” explains David Poeppel, one of the authors of the study. “Our neurophysiological findings support this theory: we make sense of strings of words because our brains combine words into constituents in a hierarchical manner. This process reflects an ‘internal grammar’ mechanism.”

The research, which appears in the latest issue of the journal Nature Neuroscience and is a collaboration between the Max Planck Institute for Empirical Aesthetics and New York University, builds on Chomsky’s work Syntactic Structures (1957). According to Chomsky, we can recognize a phrase such as “Colourless green ideas sleep furiously” as both nonsensical and grammatically correct because we have an abstract knowledge base that allows us to make such distinctions even though, based on our experience, there is no statistical relation between the words.

Neuroscientists and psychologists predominantly reject this viewpoint. They believe that our comprehension does not result from an internal grammar, but is instead based on statistical relationships between words and on sound cues to structure. This would mean that we know from experience how sentences should be properly constructed. Many linguists, in contrast, argue that a central feature of language processing is the building of hierarchical structure.

In an effort to illuminate this debate, the researchers explored whether and how linguistic units are represented in the brain during speech comprehension. To do so, a series of experiments using magnetoencephalography (MEG) was conducted, which allows measurements of the tiny magnetic fields generated by brain activity. In addition, the researchers used electrocorticography (ECoG), a clinical technique for measuring brain activity in patients being monitored for neurosurgery.  

The study’s subjects listened to sentences in both English and Mandarin Chinese in which the hierarchical structure between words, phrases, and sentences was dissociated from intonational speech cues—the rise and fall of the voice—as well as statistical word cues. The sentences were presented in an isochronous fashion—with identical timing between words—and participants listened to predictable sentences (e.g., “New York never sleeps” or “Coffee keeps me awake”), grammatically correct but less predictable sentences (e.g., “Pink toys hurt girls”), word lists (“eggs jelly pink awake”), and various other manipulated sequences.

The design allowed the researchers to isolate how the brain concurrently tracks and processes different levels of linguistic abstraction—sequences of words (“furiously green sleep colourless”), phrases (“sleep furiously” “green ideas”), or sentences (“Colourless green ideas sleep furiously”)—while removing intonational speech cues and statistical word information, which many say are necessary in building sentences. 

Their results showed that the subjects’ brains distinctly tracked three components of the speech they heard—words, phrases, and sentences—at the same time, reflecting a hierarchy in our neural processing of linguistic structures. The brain rhythms that underlie this comprehension, so-called neuronal oscillations, are tuned to the timescale of the respective linguistic unit: faster rhythms track words, while slower rhythms track phrases and sentences.
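To make this timescale-tracking logic concrete, here is a minimal simulation sketch. This is not the study’s actual analysis code, and the rates used (4 Hz for words, 2 Hz for two-word phrases, 1 Hz for four-word sentences) are illustrative assumptions: a signal that contains an oscillation at each linguistic level shows separate spectral peaks at the word, phrase, and sentence rates, which is the kind of signature the researchers looked for in the MEG and ECoG recordings.

```python
import numpy as np

# Illustrative rates (assumptions, not the study's parameters):
# words at 4 Hz, two-word phrases at 2 Hz, four-word sentences at 1 Hz.
fs = 100                      # sampling rate in Hz
t = np.arange(0, 20, 1 / fs)  # 20 seconds of simulated signal

# Simulated "neural" signal: one oscillation per linguistic level, plus noise.
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * 4 * t)      # word-rate tracking
          + np.sin(2 * np.pi * 2 * t)    # phrase-rate tracking
          + np.sin(2 * np.pi * 1 * t)    # sentence-rate tracking
          + 0.5 * rng.standard_normal(t.size))

# Power spectrum: distinct peaks emerge at 1, 2, and 4 Hz.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for target in (1.0, 2.0, 4.0):
    idx = np.argmin(np.abs(freqs - target))
    print(f"{target:.0f} Hz power: {spectrum[idx]:.0f}")
```

Because the simulated words arrive at a fixed rate, any response locked to phrases or sentences must come from grouping the words internally, not from acoustic cues—this is the reason the study presented speech isochronously.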

“Because we went to great lengths to design experimental conditions that control for statistical or sound cue contributions to processing, our findings show that we must use the grammar in our head,” explains Poeppel. “Our brains lock onto every word before working to comprehend phrases and sentences. The dynamics reveal that we undergo a grammar-based construction in the processing of language.”

With this controversial conclusion, the researchers are rekindling an old debate because the notion of abstract, hierarchical, grammar-based structure building had more or less been rejected by researchers.

(AH/MG/MEZ)

 

Original publication:

Ding, N., Melloni, L., Zhang, H., Tian, X., & Poeppel, D. (2015). Cortical Tracking of Hierarchical Linguistic Structures in Connected Speech. Nature Neuroscience. doi:10.1038/nn.4186

 

Contact:

Prof. David Poeppel, Ph.D.
Max Planck Institute for Empirical Aesthetics, Frankfurt am Main

+49 69 8300479-301

david.poeppel@aesthetics.mpg.de

 

Dr. Anna Husemann
Research Coordination/PR
Max Planck Institute for Empirical Aesthetics, Frankfurt am Main

+49 69 8300479-650

anna.husemann@aesthetics.mpg.de