28 May 2017

Prof. Anne-Lise Giraud, PhD

Perception of connected speech relies on accurate syllabic segmentation and phonemic encoding. These processes are essential because they determine the building blocks that we can manipulate mentally to understand and produce speech. Segmentation and encoding might be underpinned by specific interactions between the acoustic rhythms of speech and coupled neural oscillations in the theta and low-gamma bands. To address how neural oscillations interact with speech, we used a neurocomputational model of speech processing that generates biophysically plausible coupled theta and gamma oscillations. We show that speech could be decoded well from the low-gamma activity of this purely bottom-up artificial network when the phase of theta activity was taken into account. Because speech perception is not only a bottom-up process, we set out to develop another type of neurocomputational model, one that takes into account the influence of top-down linguistic predictions on acoustic processing. I will present preliminary results obtained with such a model and discuss the advantages of incorporating neural oscillations in models of speech processing.
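
To make the theta-gamma coupling idea concrete, here is a minimal illustrative sketch, not the model presented in the talk: a toy signal in which theta phase gates gamma amplitude (phase-amplitude coupling), followed by a readout that bins gamma power by theta phase rather than averaging over the whole signal. All frequencies, gains, and bin counts are illustrative assumptions.

```python
import numpy as np

# Toy theta-gamma coupling demo (not the biophysical network from the talk).
fs = 1000                       # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)   # 2 s of simulated time

f_theta, f_gamma = 5.0, 40.0    # assumed band centres (Hz)
theta_phase = 2 * np.pi * f_theta * t
theta = np.cos(theta_phase)

# Gamma amplitude is gated by theta phase: zero at the theta peak,
# maximal at the trough (a common convention in coupling demos).
gamma_gain = 0.5 * (1 - np.cos(theta_phase))
gamma = gamma_gain * np.sin(2 * np.pi * f_gamma * t)

signal = theta + gamma + 0.1 * np.random.randn(t.size)

# Readout "with theta phase taken into account": average gamma power
# within theta-phase bins instead of across the entire recording.
gamma_power = gamma ** 2
wrapped = theta_phase % (2 * np.pi)
phase_bins = np.linspace(0, 2 * np.pi, 9)
binned = [gamma_power[(wrapped >= lo) & (wrapped < hi)].mean()
          for lo, hi in zip(phase_bins[:-1], phase_bins[1:])]

print("global mean gamma power:", round(gamma_power.mean(), 3))
print("mean gamma power per theta-phase bin:", np.round(binned, 3))
```

The phase-binned averages rise toward the theta trough and fall toward the peak, structure that the single global average flattens out; this is the sense in which conditioning a gamma-based readout on theta phase can recover information that a phase-blind decoder would miss.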