28 May 2017

Prof. Lucia Melloni, PhD


Recent evidence suggests that during speech processing the brain entrains simultaneously to linguistic units such as syllables, words, phrases, and sentences, reflected in neural activity at the frequencies at which these units occur. In this talk, I will discuss studies in which we used statistical learning paradigms and intracranial recordings to investigate how, and where, the brain learns to segment continuous speech into relevant units. We exposed participants to streams of repeating 3-syllable nonsense words and assessed learning via an online neural measure, inter-trial coherence (ITC), which quantifies entrainment at the different segmental units (i.e., at the syllable and word levels). This allowed us to track where learning occurs and on what timescale it evolves. We observed that the neural sources underlying segmentation are broadly distributed but selectively represent the syllable and/or word rates (4 Hz and 1.33 Hz, respectively). At the syllabic rate, responses are found in areas typically implicated in general auditory processing, such as the superior temporal gyrus, whereas responses at the timescale of words also appear in association areas such as frontal cortex. Learning of the nonsense words evolved over time, reflected in increasingly stronger ITC at the word-level frequency, whereas purely syllabic responses remained stable. Furthermore, neural entrainment was observed even when participants performed a distractor task, indicating that explicit attention to the segmentation cues is not necessary to drive learning and entrainment. These studies highlight the power of online neural entrainment measures not only to unravel already acquired linguistic knowledge but also to track its acquisition.
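To illustrate the measure described above, here is a minimal sketch of how inter-trial coherence at the syllable (4 Hz) and word (1.33 Hz) rates could be computed. This is not the study's analysis pipeline; all function names, the simulated data, and parameter choices (sampling rate, trial length, noise level) are illustrative assumptions.

```python
import numpy as np

def itc(trials, fs, freq):
    """Inter-trial phase coherence at a single frequency.

    trials : array of shape (n_trials, n_samples), equal-length epochs
    fs     : sampling rate in Hz
    Returns a value in [0, 1]; 1 means identical phase on every trial.
    """
    n_samples = trials.shape[1]
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - freq))      # nearest FFT bin
    spectrum = np.fft.rfft(trials, axis=1)[:, bin_idx]
    phases = spectrum / np.abs(spectrum)           # unit phasors per trial
    return np.abs(phases.mean())                   # length of mean phasor

# Simulated example: a 4 Hz component phase-locked across trials
# (syllabic rhythm) plus a 1.33 Hz component with random phase per trial.
rng = np.random.default_rng(0)
fs, dur, n_trials = 100, 3.0, 50
t = np.arange(0, dur, 1.0 / fs)
trials = np.array([
    np.sin(2 * np.pi * 4.0 * t)                              # locked 4 Hz
    + np.sin(2 * np.pi * 1.33 * t + rng.uniform(0, 2 * np.pi))
    + rng.normal(0, 0.5, t.size)                             # noise
    for _ in range(n_trials)
])

print(itc(trials, fs, 4.0))    # near 1: strong phase locking at 4 Hz
print(itc(trials, fs, 1.33))   # low: no consistent phase at 1.33 Hz
```

In this scheme, learning of the nonsense words would show up as the 1.33 Hz ITC rising over exposure while the 4 Hz ITC stays high throughout.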