Neural Oscillations in Speech and Language Processing

Date

May 28–31, 2017

Location

Harnack-Haus of the Max Planck Society, Berlin

Hosts

Max Planck Institute for Empirical Aesthetics, Department of Neuroscience, and
Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology

The workshop will be held in English.

Introduction

Auditory neuroscience has provided strong evidence that neural oscillations synchronize to the rhythms of speech. Higher up in the hierarchy, cycles of cortical excitation and inhibition may also reflect syntactic parsing and the processing of sentence-level semantics. This international symposium will bring together leading researchers from the speech and language fields with eminent systems neuroscientists from the field of neural oscillations. Through intense discussions and presentations of exciting new work, we will lay out the basis for a unified perspective on the role of neural oscillations in speech processing and language comprehension—from phonemes to grammar.

Due to space constraints, symposium attendance is largely limited to the speakers and associated laboratory members. However, a few attendance spots remain; to inquire, please send an email to Alessandro Tavano.

Description

In recent years, auditory neuroscience has provided strong evidence that neural oscillations synchronize to the rhythm of speech stimuli. The idea is that the temporal patterns of speech reset the phase of ongoing neuronal fluctuations, facilitating speech perception. Higher up in the hierarchy, cycles of cortical excitation and inhibition may also reflect the internal processing of language features, in either a top-down or a bottom-up fashion, possibly extending to syntactic parsing and the processing of sentence-level semantics.

This international symposium will bring together world-leading researchers from the speech and language fields with eminent systems neuroscientists from the field of neural oscillations—united in the discovery of how neural oscillations subserve cortical information processing. Through intense discussions and presentations of exciting new work, we will lay out the basis for a unified perspective on the role of neural oscillations in speech processing and language comprehension—from phonemes to grammar.

Objectives

Our symposium will bring into focus the oscillatory nature of brain processes as they assist speech and language analysis. The symposium comes at a significant moment: insights into the neural oscillations underlying cognitive processes are spreading quickly across neuroscientific fields and are just now entering the neuroscience of speech processing and language comprehension. We will benefit the field by providing an early summary of a dynamically emerging literature, setting the course for coherent future research.

Scope

The first topic of the workshop, Experimental Work and Emerging Frameworks, will summarize the current state of the art and unearth fundamental controversies:

  • Beyond traditional methodology, what specific insights can neural oscillations provide to highlight fully new, potentially revolutionary aspects of the neurobiology of speech and language processing?
  • Are neural oscillations yet another dependent measure of speech and language processing, or do they have unique explanatory value?

The second topic of the workshop, A Systems-Neuroscience Perspective, will reinforce the coherence of the emerging frameworks and probe their adequacy:

  • Are there neural oscillators that specifically process speech and language? Is such a hypothesis even plausible in light of the highly stereotyped functions that neural oscillations fulfill across cortical regions and network constellations?
  • Do neural time scales provide adequate granularity to capture the extraction and hierarchical interaction of the phonological, syntactic, and semantic information conveyed by natural speech? At which time scale is each hierarchical level best reflected, and which neural mechanisms integrate the time scales on the fly?

Extras

Selected workshop presentations were recorded, edited, and are available via the websites of the Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, and the Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig. You can watch the videos inline on this page (click "more" below a speaker's name) or in our Vimeo album: https://vimeo.com/album/4809207

To foster extensive discussion beyond each speaker's core research domain, the workshop will include a poster session, keeping participants in touch with the broader picture emerging across research laboratories and, just as importantly, with the work of junior researchers.

Venue

The Neural Oscillations in Speech and Language Processing symposium will take place from May 28–31, 2017, at the Harnack-Haus of the Max Planck Society in Berlin. Founded in 1929 and refurbished in 2000 with the goal of enabling outstanding achievement through international collaboration, the Harnack-Haus provides a stimulating and relaxing working atmosphere, including on-site catering and accommodation.

Scientific Chairs

The first part, Experimental Work and Emerging Frameworks, will be chaired by Lars Meyer and Angela D. Friederici from the Department of Neuropsychology at the Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany. The Department of Neuropsychology has an outstanding track record in language neuroscience, including the discovery of language-related event-related brain potentials, the localization of the core language network via functional magnetic resonance imaging, and recent advances in mapping the structural and functional connections within the language network.

The second part, A Systems-Neuroscience Perspective, will be chaired by Alessandro Tavano and David Poeppel from the Department of Neuroscience at the newly established Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany. Based within one of the leading clusters of neuroscientific research institutions (e.g., the Max Planck Institute for Brain Research; the Ernst Strüngmann Institute; the Frankfurt Institute for Advanced Studies), the Max Planck Institute for Empirical Aesthetics pushes research boundaries by investigating the interaction of our perceptual systems with speech, literature, music, and emotions.

Speakers

Marcel Bastiaansen, PhD   (Video)

NHTV Breda University of Applied Sciences, Breda, NL and Tilburg University, NL | Website | Contact

Concerted action in the brain's language network: unification or prediction?

Oscillatory dynamics in scalp EEG and MEG are thought to (at least partially) reflect the underlying dynamics of the coupling and uncoupling of the functional neuronal networks that carry cognitive processes. In my presentation I will selectively review recent literature that addresses the oscillatory dynamics observed during various aspects of sentence-level language comprehension.
For instance, there is evidence that low-frequency oscillatory dynamics (theta-band power changes) are related to lexical retrieval, whereas high-frequency dynamics (beta/gamma power and coherence changes) are related to sentence-level integration (unification) of the individual lexical items. The available evidence can be taken to suggest that beta-band oscillations are predominantly related to syntactic unification, whereas gamma-band oscillations index semantic unification. However, we have recently proposed that the observed oscillatory responses might equally well be interpreted in a predictive coding framework.

Dr. Nai Ding   (Video)

College of Biomedical Engineering and Instrument Sciences, Zhejiang University, CN | Website | Contact

Neural Representation of Hierarchical Structures in Speech

The most critical attribute of human language is its unbounded combinatorial nature: smaller elements can be combined into larger structures on the basis of a grammatical system, resulting in a hierarchy of linguistic units, such as words, phrases, and sentences. Mentally parsing and representing such structures, however, poses challenges for speech comprehension. I will present our recent work, which shows that cortical activity at different timescales can concurrently track the time course of abstract linguistic structures at different hierarchical levels, such as words, phrases, and sentences. More importantly, neural entrainment to higher-level linguistic structures such as phrases and sentences cannot be explained by neural tracking of acoustic features or of transitional probability cues between words. I will also discuss preliminary results on the temporal dynamics of neural entrainment to multi-syllable words and on how attention differentially modulates neural entrainment to different linguistic structures.
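
To make the frequency-tagging logic concrete, the following minimal Python sketch (ours, not the authors' pipeline) simulates a response that concurrently tracks syllables at 4 Hz, phrases at 2 Hz, and sentences at 1 Hz, the rates used in Ding et al. (2016), and recovers each rate as a distinct spectral peak; all signal parameters are illustrative assumptions.

```python
# Hypothetical sketch of hierarchical frequency tagging: concurrent
# neural tracking of syllables (4 Hz), phrases (2 Hz), and sentences
# (1 Hz) appears as separate peaks in the response spectrum.
import numpy as np

fs = 250.0                      # sampling rate (Hz) -- assumed
t = np.arange(0, 60, 1 / fs)    # one 60-s stimulation block

# Simulated response: tracking at the three linguistic rates, buried in noise.
rng = np.random.default_rng(0)
signal = (1.0 * np.sin(2 * np.pi * 4 * t)
          + 0.5 * np.sin(2 * np.pi * 2 * t)
          + 0.3 * np.sin(2 * np.pi * 1 * t)
          + 2.0 * rng.standard_normal(t.size))

# With a 60-s window the spectral resolution is 1/60 Hz, so each
# linguistic rate falls on its own frequency bin.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f_target, label in [(4, "syllable"), (2, "phrase"), (1, "sentence")]:
    bin_idx = np.argmin(np.abs(freqs - f_target))
    # Compare the tagged bin against its neighbours (a common
    # peak-vs-baseline contrast in frequency-tagging studies).
    neighbours = spectrum[bin_idx - 5:bin_idx + 6]
    baseline = (neighbours.sum() - spectrum[bin_idx]) / (neighbours.size - 1)
    print(f"{label:>8s} rate {f_target} Hz: peak/baseline = "
          f"{spectrum[bin_idx] / baseline:.1f}")
```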

Prof. Dr. Pascal Fries

Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, DE | Website | Contact

Rhythms for Cognition: Communication through Coherence

I will show that free viewing induces gamma-band oscillations in early visual cortex. If the gamma rhythm in a lower visual area entrains a gamma rhythm in a higher visual area, this might establish an effective communication protocol: The lower area sends a representation of the visual stimulus rhythmically, and the higher area is most excitable precisely when this representation arrives. At other times, the higher area is inhibited, which excludes competing stimuli. I refer to this scenario as the Communication-through-Coherence (CTC) hypothesis. I will show that the gamma rhythm in awake macaque V4 modulates the gain of synaptic inputs. I will further show that constant optogenetic stimulation in anesthetized cat area 21a (homologue to V4) induces a local gamma rhythm, and that this isolated gamma is sufficient to produce similar gain modulation. These gain modulation effects would be ideal to lend enhanced effective connectivity to attended stimuli. I will show that this is indeed the case between macaque V1 and V4. When two visual stimuli induce two local gamma rhythms in V1, only the one induced by the attended stimulus entrains V4. I will then investigate how these changes in gamma synchronization between visual areas are controlled by influences from parietal cortex. I will show that posterior parietal cortex influences visual areas primarily via beta-band synchronization. I will show that generally, beta-band influences are stronger in the top-down direction, while gamma-band influences are stronger in the bottom-up direction. This holds across macaques and human subjects, and in both species it allows building a hierarchy of visual areas based on the directed influences. Finally, I will show that attentional selection occurs at a theta rhythm. When two objects are monitored simultaneously, attentional benefits alternate at 4 Hz, consistent with an 8 Hz sampling rhythm, sampling them in alternation.

Prof. Oded Ghitza, PhD   (Video)

Biomedical Engineering & Hearing Research Center, Boston University, US | Website | Contact

Oscillation-based models of speech perception: pressing questions

Do cortical oscillations play a role in speech perception? And if so, are they active in segmenting speech, decoding it, or both? To date, we have made progress in formulating the cortical computation principles that underlie segmentation. Models of syllabic segmentation, with theta-band activity at the core, are now reasonably explicit and mature. Models of prosodic segmentation, with delta at the core, are not yet well established. For these models to successfully predict psychophysical data, some necessary requirements must be satisfied. I will discuss a list of such requirements, examine the degree to which they are plausible and answerable, and conclude with emerging, pressing questions.

Prof. Anne-Lise Giraud, PhD   (Video)

Department of Neuroscience, University of Geneva, CH | Website | Contact

Speech processing in auditory cortex with and without oscillations

Perception of connected speech relies on accurate syllabic segmentation and phonemic encoding. These processes are essential because they determine the building blocks that we can manipulate mentally to understand and produce speech. Segmentation and encoding might be underpinned by specific interactions between the acoustic rhythms of speech and coupled neural oscillations in the theta and low-gamma band. To address how neural oscillations interact with speech, we used a neurocomputational model of speech processing generating biophysically plausible coupled theta and gamma oscillations. We show that speech could be well decoded from this purely bottom-up artificial network’s low-gamma activity, when the phase of theta activity was taken into account. Because speech is not only a bottom-up process, we set out to develop another type of neurocomputational model that takes into account the influence of top-down linguistic predictions on acoustic processing. I will present preliminary results obtained with such a model and discuss the advantage of incorporating neural oscillations in models of speech processing.

Prof. Dr. Joachim Gross

Institute of Neuroscience and Psychology, University of Glasgow, GB | Website | Contact

What can brain oscillations tell us about speech processing?

Recent years have seen an increasing interest in studying brain oscillations to discover fundamental principles of speech processing in the human brain. New findings have been met with great enthusiasm but also criticism. In my presentation I will review some of the recent developments pertaining to the question of what brain oscillations can tell us about speech processing. I will critically discuss advantages and pitfalls in the use of spectral analysis for studying speech processing. Arguments will be based on conceptual and technical considerations. Finally, I will summarise new findings from our ongoing projects.

Dr. Simon Hanslmayr, PhD   (Video)

School of Psychology, University of Birmingham, GB | Website | Contact

Searching for memory in brain waves – the synchronization/desynchronization conundrum

Brain oscillations have been proposed to be one of the core mechanisms underlying episodic memory. But how do they operate in the service of memory? A review of the literature reveals a conundrum: some studies highlight the role of synchronized oscillatory activity, whereas others highlight the role of desynchronized activity. In this talk I will describe a framework that potentially resolves this conundrum and integrates these two opposing oscillatory behaviours. I will present results from studies using different techniques to study oscillations (from EEG/MEG and EEG-fMRI to human single-unit and LFP recordings) and argue, based on these findings, that synchronization and desynchronization reflect a division of labour between a hippocampal and a neocortical system, respectively. Specifically, whereas desynchronization is key for the neocortex to represent information, synchronization in the hippocampus is key to bind information. This novel oscillatory framework integrates synchronization and desynchronization mechanisms in order to explain how the two systems (i.e., neocortex and hippocampus) interact in the service of episodic memory. Finally, I will discuss open questions, specific predictions, and challenges that follow from this framework.

Prof. Dr. Christoph S. Herrmann   (Video)

Carl von Ossietzky Universität Oldenburg, DE | Website | Contact

Dr. Anna Wilsch

Carl von Ossietzky Universität Oldenburg, DE

Transcranial current stimulation with speech envelopes enhanced intelligibility

Cortical entrainment of the auditory cortex to the broadband temporal envelope of a speech signal is crucial for speech comprehension. Entrainment results in phases of high and low neural excitability, which structure and decode the incoming speech signal. Entrainment to speech is strongest in the theta frequency range (4–8 Hz), the average frequency of the speech envelope. If a speech signal is degraded, entrainment to the speech envelope is weaker and speech intelligibility declines. Besides perceptually evoked cortical entrainment, transcranial alternating current stimulation (tACS) entrains neural oscillations by applying an electric signal to the brain. Accordingly, tACS-induced entrainment in auditory cortex has been shown to improve auditory perception. The aim of the current study was to modulate speech intelligibility externally by means of tACS such that the electric current corresponds to the envelope of the presented speech stream. Participants performed the Oldenburg sentence test with sentences presented in noise in combination with tACS. Critically, tACS was applied with time lags of 0 to 250 ms in 50-ms steps relative to sentence onset (auditory stimuli were simultaneous with or preceded tACS). We were able to show that envelope-tACS modulated sentence comprehension such that comprehension at the time lag of the best performance was significantly better than at the time lag of the worst performance. Interestingly, sentence comprehension across time lags was modulated sinusoidally. Altogether, envelope-tACS modulates the intelligibility of speech in noise, presumably by enhancing and disrupting (at time lags resulting in in- or out-of-phase stimulation, respectively) cortical entrainment to the speech envelope in auditory cortex.
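
For readers unfamiliar with envelope-based stimulation, the following Python sketch illustrates two ingredients of such a design under stated assumptions: extracting the broadband temporal envelope via the Hilbert transform (with an assumed 8 Hz low-pass matching the theta range) and generating the 0–250 ms lag grid described above. It is an illustration, not the authors' stimulation code.

```python
# Hypothetical sketch: derive a speech-envelope waveform of the kind
# that could drive envelope-tACS, plus its lagged variants.
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def speech_envelope(audio, fs, cutoff_hz=8.0):
    """Broadband temporal envelope, low-passed to <= cutoff_hz.
    The 8 Hz cutoff is an assumption matching the theta range (4-8 Hz)."""
    env = np.abs(hilbert(audio))                    # instantaneous amplitude
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, env)

def lagged_waveforms(envelope, fs, max_lag_ms=250, step_ms=50):
    """Envelope copies delayed by 0, 50, ..., 250 ms (zero-padded)."""
    out = {}
    for lag in range(0, max_lag_ms + step_ms, step_ms):
        shift = int(round(lag * fs / 1000.0))
        out[lag] = np.concatenate([np.zeros(shift),
                                   envelope[:len(envelope) - shift]])
    return out

# Example with a synthetic 2-s "sentence" (noise burst) at 16 kHz:
fs = 16000
audio = np.random.default_rng(1).standard_normal(fs * 2)
env = speech_envelope(audio, fs)
waves = lagged_waveforms(env, fs)
print(sorted(waves))   # [0, 50, 100, 150, 200, 250]
```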

Prof. Ole Jensen, PhD   (Video)

Centre for Human Brain Health, University of Birmingham, GB | Website | Contact

Coupling between frontal gamma and posterior alpha oscillations supports language prediction

Predictions during language perception might serve to support the integration of incoming words into sentence context and thus rely on interactions between areas in the language network. We have conducted MEG studies in which participants read sentences that varied in contextual constraint, thus manipulating the predictability of the sentence-final words. Just prior to the sentence-final words, we observed stronger alpha power suppression for the highly predictable words in left inferior frontal cortex, the left posterior temporal region, and the visual word form area (VWFA). Importantly, temporal and VWFA alpha power correlated negatively with left frontal gamma power for the highly constraining sentences. We suggest that these findings reflect the initiation of an anticipatory unification process in the language network.

Prof. Christoph Kayser, PhD   (Video)

Institute of Neuroscience and Psychology, University of Glasgow, GB | Website | Contact

Visual facilitation of auditory encoding and the role of slow network activity

Seeing a speaker’s face enhances speech intelligibility in adverse environments. From many previous studies we know that slow rhythmic network activity likely plays a role in this process. Here I first review studies demonstrating a direct link between local rhythmic activity in auditory cortex and the excitability of individual neurons. Furthermore, I explore recent evidence demonstrating a direct link between the state of pre-stimulus activity and subjects’ perceptual performance. I then turn to recent MEG studies investigating the network mechanisms underlying the visual facilitation of speech perception. In one study we quantified local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. We found that during high acoustic SNR, speech encoding by entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioural benefit arising from seeing the speaker’s face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. These results demonstrate a role for auditory-motor interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments.

Prof. Nancy Kopell, PhD

Department of Mathematics & Statistics, Boston University, US | Website | Contact

Gamma, Beta and Predictions

A violation of expectation can lead to an increase or a decrease in the power of brain rhythms (gamma and beta). This talk discusses physiological mechanisms that could underlie the opposite outcomes, as well as potential implications for speech processing.

Prof. Peter Lakatos, MD, PhD   (Video)

Dynamical Cognitive Neuroscience Lab, Nathan Kline Institute, Orangeburg, US | Website | Contact

Dynamics and function of neuronal oscillatory entrainment in the auditory system

While we have convincing evidence that attention to auditory stimuli modulates neuronal responses even at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms were, until recently, unknown. Our earlier studies have demonstrated that perpetually ongoing, rhythmic excitability fluctuations of neuronal ensembles can track the timing of stimuli within rhythmic stimulus sequences or streams by synchronizing to the rhythm of the streams. The mechanism enabling this dynamic tracking is termed oscillatory entrainment. After a short introduction to neuronal oscillations, I will present results of our recent studies showing that topographically organized neuronal oscillations, entrained across all A1 neuronal ensembles in the non-human primate, act as a spectrotemporal filter mechanism of selective attention. This two-dimensional filter is organized in cortical space and time to match the properties of attended rhythmic stimuli. As a consequence, it amplifies attended and suppresses unattended frequency content at specific time points, which at the same time stabilizes the sensory representation of attended stimuli. Several recent studies from our lab and others indicate that the same mechanism aids in the segregation and perception of attended speech.

Andrea E. Martin, PhD   (Video)

School of Philosophy, Psychology, and Language Sciences, University of Edinburgh, GB & Max Planck Institute for Psycholinguistics, Nijmegen, NL | Website | Contact

Linking linguistic and cortical computation via hierarchy and time

Human language is a fundamental biological signal with computational properties that differ from other perception-action systems: hierarchical relationships between words, phrases, and sentences, and the unbounded ability to combine smaller units into larger ones, resulting in a "discrete infinity" of expressions. These properties have long made language hard to account for from a biological systems perspective and within models of cognition. One way to begin to reconcile the language faculty with both these domains is to hypothesize that, when hierarchical linguistic representation became an efficient solution to a computational problem posed to the organism, the brain repurposed an available neurobiological subroutine. Under such an account, a single mechanism must have the capacity to perform multiple, functionally related computations, e.g., detect and represent the linguistic signal, and perform other cognitive functions, while, ideally, oscillating like the human brain. We show that a well-supported symbolic-connectionist model of analogy (Discovery Of Relations by Analogy; Doumas, Hummel, & Sandhofer, 2008) oscillates while processing sentences – despite being built for an entirely different purpose (learning relational concepts and performing analogical reasoning). The model processes hierarchical representations of sentences, and while doing so, it exhibits oscillatory patterns of activation that closely resemble the human cortical response to the same stimuli (cf. Ding, Melloni, Zhang, Tian, & Poeppel, 2016). From the model, we derive an explicit computational mechanism for how the brain could convert perceptual features into hierarchical representations across multiple timescales, providing a linking hypothesis between linguistic and cortical computation. We argue that this computational mechanism – using time to encode hierarchy across a layered network, while preserving (de)compositionality – can satisfy the computational requirements of language, in addition to performing other cognitive functions. Our results suggest a formal and mechanistic alignment between representational structure building and cortical oscillations that has broad implications for discovering the first principles of linguistic computation in the human brain.

Prof. Lucia Melloni, PhD   (Video)

Department of Neurology, NYU School of Medicine, US and Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt, DE | Website | Contact

Learning to uncover structure: insights from intracranial EEG

Recent evidence suggests that the brain entrains to linguistic units such as syllables, words, phrases, and sentences simultaneously during speech processing. This is reflected by neural activity at the frequencies at which these linguistic units occur. In this talk, I will discuss studies in which we have used statistical learning paradigms and intracranial recordings to investigate how the brain learns to segment continuous speech into relevant units, and where this takes place. We exposed participants to streams of repeating 3-syllable nonsense words and assessed learning via online neural measures of inter-trial coherence (ITC), quantifying entrainment at the different segmental units (i.e., at the syllable and word levels). This allowed us to track where learning occurs and at what timescale it evolves. We observed that the neural sources underlying segmentation are broadly distributed but show selective representation of the syllable and/or word rates (i.e., 4 Hz and 1.33 Hz, respectively). At the syllabic rate, responses are found in areas typically implicated in general auditory processing, such as the superior temporal gyrus, whereas responses at the timescale of words also appear in other association areas, such as frontal cortex. Learning of the nonsense words evolved over time, reflected by increasingly stronger ITC at the word-level frequency, whereas purely syllabic responses remained stable across time. Furthermore, neural entrainment was observed even when participants performed a distractor task, indicating that explicit attention to the segmentation cues is not necessary to drive learning and entrainment. These studies highlight the power of online neural entrainment measures not only to unravel already acquired linguistic knowledge but also to track its acquisition.
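
As an illustration of the ITC measure referred to above, here is a minimal Python sketch (not the authors' pipeline): for each trial, the Fourier phase at the target frequency is extracted, and ITC is the length of the mean phase vector across trials (1 = perfectly consistent phases, 0 = random). The simulated data and all parameters are assumptions.

```python
# Hypothetical sketch of inter-trial coherence (ITC) at the syllable
# (4 Hz) and word (1.33 Hz) rates of a trisyllabic-word stream.
import numpy as np

def itc(trials, fs, freq):
    """trials: (n_trials, n_samples) array of single-channel epochs."""
    n = trials.shape[1]
    bin_idx = int(round(freq * n / fs))           # nearest FFT bin
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, bin_idx])
    return np.abs(np.mean(np.exp(1j * phases)))   # mean phase-vector length

# Simulated epochs, 9 s each (twelve 1.33 Hz "words"), with a
# phase-consistent word-rate component, as if segmentation were learned:
fs, dur, n_trials = 100, 9.0, 40
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)
trials = (np.sin(2 * np.pi * (4 / 3) * t)         # word-rate tracking
          + 3.0 * rng.standard_normal((n_trials, t.size)))

print(f"ITC at 4 Hz (syllables): {itc(trials, fs, 4.0):.2f}")   # near chance
print(f"ITC at 1.33 Hz (words):  {itc(trials, fs, 4 / 3):.2f}") # high
```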

Dr. Lars Meyer   (Video)

Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, DE | Website | Contact

The Purpose of Synchronicity: Neural Oscillations Align Excitability with Linguistic Informativeness

In auditory neuroscience, there is emerging consensus that neural oscillations phase-synchronize with frequency-isomorphous acoustic and linguistic rhythms in speech. Yet the tracking of such rhythms is not in itself language comprehension—which requires the understanding of the syntactic and semantic information that speech actually symbolizes. Here I will present evidence that oscillatory synchronicity may indeed facilitate linguistic information processing. First, I will show that delta-band oscillatory phase can drive sentence interpretations that contradict acoustic cues. Second, I will show that oscillatory synchronicity aligns neural excitability (as indexed by delta-band oscillatory phase and ERPs) with linguistic informativeness (as quantified by information-theoretic metrics)—facilitating language comprehension (as measured through a high-level linguistic task). I will argue that phase-synchronization during speech processing is not a self-contained mechanism to ensure high-fidelity rhythm tracking; instead, it serves to optimally align information-extraction capacities with linguistic information.
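
The abstract does not specify which information-theoretic metrics were used; purely as an illustration, the Python sketch below computes one standard such metric, word surprisal (the negative log probability of a word given its context), from toy bigram counts.

```python
# Toy illustration of word surprisal, -log2 P(word | context), with
# bigram probabilities estimated from a tiny made-up corpus.
import math
from collections import Counter

corpus = ("the dog chased the cat the cat chased the mouse "
          "the mouse ran and the dog ran").split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def surprisal(prev, word, alpha=1.0):
    """Bigram surprisal in bits, with add-alpha smoothing."""
    vocab = len(unigrams)
    p = (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab)
    return -math.log2(p)

# Predictable continuations carry fewer bits than surprising ones:
print(f"surprisal('the' -> 'dog'): {surprisal('the', 'dog'):.2f} bits")
print(f"surprisal('the' -> 'ran'): {surprisal('the', 'ran'):.2f} bits")
```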

Prof. Dr. Jonas Obleser   (Video)

Department of Psychology, University of Lübeck, DE | Website | Contact

Comprehending the patterns: Patterns of comprehension

The renewed interest in neural oscillations and the methodological advances in characterising them have flooded us with empirical results. A clear pattern of frequency-to-function assignment is yet to emerge, however: it remains tempting to speak of brain rhythms the way neuroimaging enthusiasm made us speak of brain regions fifteen years ago (“The superior temporal sulcus does X”, and “theta does Y”). In my view, it is not only foreseeably wrong but also unnecessary for speech and language science to repeat these same mistakes as we venture deeper into analysing and interpreting electrophysiological signals in the time–frequency domain. For this workshop, I would like to discuss (and exemplify using the attentive, speech-comprehending brain) how a more parsimonious, neurophysiologically grounded framework should at least help us avoid some of these epistemological traps. While trying not to fall prey to the criticism outlined above, I will aim at synthesising evidence from our and other labs on how attentive listening and speech comprehension in all likelihood manifest in at least two distinct processing modes or patterns: slow-oscillatory “entrainment” (presented alongside a more careful definition of what we should and should not denote by that term), trading off against faster (alpha/beta) “modulation”, or goal-driven inhibition.

Vitória Magalhães Piai, PhD   (Video)

Radboud University, Donders Institute for Brain, Cognition and Behaviour and Radboud University Medical Center, Department of Medical Psychology, Nijmegen, NL | Website | Contact

Oscillations as a bridge between language and other cognitive domains

Language comprehension and production rely on memory and control processes. In addition, the motor system is recruited for language production. Memory, motor, and control processes are well-studied outside of the language domain. I will argue that oscillations may constitute the best measure to understand language in relation to these other domains. In particular, I will discuss alpha-beta oscillations in the lateral cortex with respect to memory and motor aspects of word production. Additionally, I will show how hippocampal theta oscillations, which are tightly related to episodic memory, track the amount of semantic associations in sentence contexts. I will also show how resolving competition between words in language production is associated with theta oscillations in the medial frontal cortex, a signature of executive function. Finally, I will discuss what these neuronal signatures can reveal about language lateralisation and neuroplasticity in patient populations.

Dr. Alessandro Tavano   (Video)

Max Planck Institute for Empirical Aesthetics, Frankfurt, DE | Website | Contact

Internal rhythms to sentences

Cross-linguistic evidence from English and Mandarin Chinese suggests that low-frequency cortical rhythms (~delta band, 1–4 Hz) can track the constituent structure of a linguistic hierarchy, e.g., phrasal and sentential organization (Ding et al., 2016). Here we extend this paradigm to German and also investigate frequencies below 1 Hz. Stimuli were sentences composed of four monosyllabic words, five monosyllabic words, or five disyllabic words, delivered at a constant rate (4 Hz for monosyllabic words, 2 Hz for disyllabic words) and concatenated in blocks of ten sentences. Phrase and sentence boundaries were not marked by acoustic cues. Participants detected oddball blocks, which contained ungrammatical sentences, interspersed with regular blocks containing only correct sentences. Using spectral analyses of the EEG data from each regular block, we find that cortical rhythms reliably track sentence-level structure also in the low delta band: at 0.8 Hz and 0.4 Hz for monosyllabic and disyllabic five-word sequences, respectively. Surprisingly, both of these rhythms displayed a clear harmonic structure, supporting the hypothesis of internal rhythmogenesis via harmonic oscillators. We further tested the findings using two attention manipulations: 1) attention to the stimuli vs. attention away from the stimuli, and 2) detection of voice-pitch vs. grammaticality oddballs. Our data provide a broader empirical base for understanding the role of brain rhythms in tracking natural spoken language.

Prof. Rufin VanRullen, PhD   (Video)

CerCo-CNRS, Toulouse, FR | Website | Contact

Perceptual cycles in vision and audition

Recent (and less recent) evidence suggests that visual perception and attention are intrinsically rhythmic. For example, in a variety of visual tasks, the trial-by-trial outcome was found to depend on the precise phase of spontaneous pre-stimulus EEG oscillations in specific frequency bands (between 7 and 15 Hz). This implies that there are “good” and “bad” phases for visual perception and attention; in other words, visual perception and attention proceed as a succession of ongoing cycles. Auditory processing, on the other hand, does not appear to be shaped in a similar way by spontaneous brain oscillations. Particularly in the context of speech processing, where the rhythmic structure of the inputs carries important information, neural oscillations are instead dynamically adjusted to this input structure. As a result, auditory perceptual cycles, if they exist, would not just shape sensory perception (as in vision) but also be shaped by it.
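
To illustrate the kind of analysis behind such phase-dependent effects, here is a minimal Python sketch using simulated data and assumed parameters: trials are binned by their estimated pre-stimulus 10 Hz phase, and hit rates are compared across phase bins.

```python
# Hypothetical sketch of a pre-stimulus phase-binning analysis:
# a "good phase"/"bad phase" difference in hit rate is the
# signature of a perceptual cycle.
import numpy as np

rng = np.random.default_rng(3)
fs, n_trials = 500, 400
t = np.arange(-0.5, 0.0, 1 / fs)           # 500 ms pre-stimulus window

# Simulated EEG: 10 Hz oscillation with random phase per trial, plus
# noise; detection is made phase-dependent by construction.
true_phase = rng.uniform(-np.pi, np.pi, n_trials)
epochs = (np.sin(2 * np.pi * 10 * t + true_phase[:, None])
          + 1.0 * rng.standard_normal((n_trials, t.size)))
hits = rng.random(n_trials) < 0.5 + 0.3 * np.sin(true_phase)

# Estimate each trial's 10 Hz phase from the pre-stimulus FFT
# (10 Hz sits on an exact bin of a 0.5 s window: bin = 10 * 0.5 = 5).
k = int(round(10 * t.size / fs))
est_phase = np.angle(np.fft.rfft(epochs, axis=1)[:, k])

# Bin trials by estimated phase and compare hit rates across bins.
edges = np.linspace(-np.pi, np.pi, 7)      # six phase bins
which = np.digitize(est_phase, edges) - 1
rates = [hits[which == b].mean() for b in range(6)]
print("hit rate per phase bin:", np.round(rates, 2))
print("best - worst:", round(max(rates) - min(rates), 2))
```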

Prof. Virginie van Wassenhove, PhD   (Video)

Cognitive Neuroimaging Unit, CEA DRF/I2BM, INSERM, Paris-Sud University, Paris-Saclay University, NeuroSpin center, Gif/Yvette, FR | Website | Contact

Neural oscillations: reconciling timing and meaning

Neural oscillations have been implicated in various cognitive functions, highlighting their relevance for the timing of cognition. In humans, cortical oscillations may parse continuous speech signals into computational units (e.g., syllables or words) necessary for speech comprehension. Neural oscillations may serve as natural parsers for bottom-up acoustic parsing and may also be modulated top-down by available linguistic representations. Using magnetoencephalography, we contrasted acoustic and linguistic parsing using bistable speech sequences: while listening to speech sequences, participants were asked to maintain one of the two possible speech percepts through volitional control. The tracking of speech dynamics by neural oscillations was predicted not to follow the acoustic properties alone, but to shift in time according to the participant’s conscious speech percept. Our results showed two dissociable markers of neural speech tracking under endogenous control: small modulations in low-frequency oscillations and variable latencies of high-frequency activity (specifically, the beta and gamma bands). Whereas changes in low-frequency neural oscillations are compatible with the encoding of pre-lexical segmentation cues, high-frequency activity was informative about an individual’s conscious speech percept. These and other results will feed a discussion on the functional role of neural oscillations in representing meaning and/or time.

Dr. Elana Zion-Golumbic

The Gonda Brain Research Center, Bar Ilan University, Ramat Gan, IL | Website | Contact

Depth of Processing for Unattended Speech

Speech processing requires analysis of auditory input at different acoustic and linguistic levels. A key question is which levels of speech processing require attention, and what depth of processing is applied to unattended speech. To address this question, we used the recently developed Concurrent Hierarchical Tracking approach (CHT; Ding et al., 2016), which differentiates the neural signatures of responses to distinct acoustic and linguistic levels within speech stimuli – syllables, words, phrases, and sentences – by presenting them at unique frequencies. We employed this experimental approach to probe which linguistic levels are represented in the neural response to speech under different states of inattention. In this talk I will discuss data pertaining to the effects of task relevance and speaker relevance on the depth of speech processing, as well as a comparison between wakefulness and sleep as an extreme case of inattention. These studies provide new insights regarding the functional bottlenecks imposed on linguistic processing and the attentional resources necessary for lexical, syntactic, and semantic processing of speech.

Contact

Alessandro Tavano

Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany

+49 69 8300479-321

alessandro.tavano@ae.mpg.de

Organized by

Dr. Lars Meyer, M. Sc.

Scientific Researcher
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

+49 341 9940-2266

lmeyer@cbs.mpg.de

Dr. Alessandro Tavano

Scientific Researcher
Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany

+49 69 8300479-321

alessandro.tavano@ae.mpg.de

Prof. Dr. Dr. h.c. Angela D. Friederici

Director
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

+49 341 9940-112

friederici@cbs.mpg.de

Prof. David Poeppel, PhD

Director
Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany

+49 69 8300479-301

cordula.ullah@ae.mpg.de

Downloads

  • Program "NO17" (PDF, A4, single pages), 260 KB
  • Poster "NO17" (PDF, A1), 2 MB