May 28, 2017

Dr. Elana Zion-Golumbic

Speech processing requires analysis of auditory input at different acoustic and linguistic levels. A key question is which levels of speech processing require attention, and what depth of processing is applied to unattended speech. To address this question, we used the recently developed Concurrent Hierarchical Tracking approach (CHT; Ding et al., 2016), which differentiates the neural signatures of responses to distinct acoustic and linguistic levels within speech stimuli – syllables, words, phrases, and sentences – by presenting them at unique frequencies. We employed this approach to probe which linguistic levels are represented in the neural response to speech under different states of inattention. In this talk I will discuss data on the effects of task-relevance and speaker-relevance on the depth of speech processing, as well as a comparison between wakefulness and sleep, with sleep serving as an extreme case of inattention. These studies provide new insights into the functional bottlenecks imposed on linguistic processing and the attentional resources necessary for lexical, syntactic, and semantic processing of speech.
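
The CHT logic rests on frequency tagging: if each linguistic level is presented at its own fixed rate, neural tracking of that level appears as a spectral peak at the corresponding frequency. As a minimal illustration only (not the analysis code from these studies), the Python sketch below simulates a signal with assumed presentation rates loosely following Ding et al. (2016) – syllables at 4 Hz, phrases at 2 Hz, sentences at 1 Hz – and checks for spectral peaks at those tagged frequencies.

```python
import numpy as np

# Illustrative frequency-tagging sketch; rates are assumptions for
# demonstration, loosely based on Ding et al. (2016).
fs = 200.0   # sampling rate in Hz (assumed)
dur = 60.0   # seconds of simulated recording
t = np.arange(0, dur, 1 / fs)

rates = {"sentence": 1.0, "phrase": 2.0, "syllable": 4.0}

# Simulated neural signal: one component per tagged rate, plus noise.
rng = np.random.default_rng(0)
signal = sum(np.cos(2 * np.pi * f * t) for f in rates.values())
signal = signal + rng.normal(scale=2.0, size=t.size)

# Amplitude spectrum: tracking of a linguistic level shows up as a
# peak at that level's presentation frequency.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for level, f in rates.items():
    idx = np.argmin(np.abs(freqs - f))
    # Crude peak test: compare the tagged bin to nearby off-target bins.
    neighbors = np.r_[spectrum[idx - 4:idx - 1], spectrum[idx + 2:idx + 5]]
    snr = spectrum[idx] / neighbors.mean()
    print(f"{level:>9s} rate {f:.1f} Hz: spectral SNR ~ {snr:.1f}")
```

In this simulation all three peaks are present by construction; the empirical question addressed in the talk is which of these peaks survive when the speech stream is unattended.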