The Department of Neuroscience works primarily on the neurobiological foundations of speech perception, language processing, auditory cognition, and music, including the dimensions of aesthetic experience.
The main methods employed include electrophysiological recordings using magnetoencephalography (MEG), electroencephalography (EEG), and electrocorticography (ECoG), as well as imaging studies using structural and functional magnetic resonance imaging (MRI). Our neuroscience-focused studies typically also include a wide range of behavioral and psychophysical approaches. In general, the approach is one of “methodological pluralism” – that is to say, we use the methodology that is most suited to address a given question. The research questions are motivated by issues arising from neurobiology, psychology, and theoretical, computational, and psycholinguistics.
The cognitive neurosciences of language and music face both empirical and theoretical challenges. Most current research, dominated by neuroimaging and electrophysiological techniques, seeks to identify brain regions that underpin aspects of processing (such as phonology, syntax, or semantics in language; or rhythm and timbre in music). The emphasis lies primarily on localization of function and characterization of electrophysiological response properties. Practical challenges arise in the context of such a research program, for example obtaining data of sufficiently high resolution to generate adequate functional anatomic maps. Beyond such practical concerns, this “maps problem” raises the question of the extent to which functional anatomy can ultimately satisfy the explanatory needs of theories of perception and cognition. Notably, the neural bases of speech, language, and music are typically discussed in precisely those terms (i.e., local brain regions, processing streams, cerebral hemispheres, cortical networks).
The second challenge is more formidable, namely how to formulate the links between neurobiology and cognition. How do we characterize the relation between the primitives (or the elementary parts) of speech, language, or music and the primitives of neurobiology? Dealing with this “mapping problem” invites the development of linking hypotheses. The cognitive sciences provide granular, theoretically motivated claims about the structure of various domains (the “primitives” or the “cognome” or the “parts list”); neurobiology, similarly, provides a parts list of the available neural structures and functions. However, explanatory connections will require crafting computationally explicit linking hypotheses at the right level of granularity.
For both the practical “maps problem” and the principled “mapping problem”, embracing interdisciplinary approaches and sources of evidence helps formulate better hypotheses about how the brain makes language and music possible – two of the most fundamental aspects of human experience.
What neuronal and cognitive representations and computations form the basis for the transformation from “vibrations in the ear” (sounds) to “abstractions in the head” (words)? Successful communication using spoken language requires a speech processing system that can negotiate between the demands of auditory perception and motor outputs, on the one hand, and the representational and computational requirements of the language system, on the other.
The perception of dynamically changing signals, the very basis of listening to language or music, or seeing naturalistic visual scenes, requires an analysis of the temporal information that forms (part of) the basis of such signals. What are the temporal primitives that underlie their perceptual analysis? How is incoming information temporally “sampled”? What type of temporal information is necessary to experience, say, rhythm, or syllable duration, or temporal intervals, or change in a sequence?
This research area takes a neurobiological view of “the aesthetic granularity problem.” What are the “atoms of aesthetic experience,” as viewed from human neuroscience? Experiencing a single musical note or one word is arguably too small a unit of analysis; experiencing an entire symphony or whole novel is arguably too big. What constitutes an “aesthetic primitive,” from a brain’s-eye view?
Neuronal oscillations are believed to play a role in various perceptual and cognitive tasks, including attention, navigation, memory, motor planning, and – most relevant in the context of the present work – spoken-language comprehension. The specific computational functions of neuronal oscillations, however, remain uncertain. We aim to elucidate how these ubiquitous neurophysiological attributes may underpin speech, language, and music processing.
Many recent theories of perception and cognition suggest that the brain uses internal models of the world to predict forthcoming events. There is compelling evidence from a wide range of studies that prediction occurs during language comprehension and music listening as well. A successful system of this type needs to predict not only the content of future events (‘what’) but also their timing (‘when’).
Speech is Special and Language is Structured
#84 György Buzsáki and David Poeppel
September 15, 2020 | Brain-inspired
Ep 49 - Language: Constructing Knowledge Beyond Words with Dr. David Poeppel
January 3, 2020
#46 David Poeppel: From Sounds to Meanings
September 8, 2019 | Brain-inspired
Episode 15: David Poeppel on Thought, Language, and How to Understand the Brain
September 24, 2018 | Thinking