Music and language (M&L) are uniquely human capacities. Often their boundaries are blurred—in song and poetry, or in the melodic way we communicate with infants; and most certainly, M&L share a number of design features, such as their “syntactic” structures and dynamics of rhythm and melody. Although M&L are by no means identical, they may be similar enough for the brain to apply similar “solutions” to their perception and production—shared solutions that may account for music–language synergies and hold promise for rehabilitation and pedagogy.

The degrees to which human cognition dissociates or integrates M&L, and how M&L are grounded neurally and linked both cognitively and aesthetically, are questions that the Research Group Neurocognition of Music and Language aims to tackle. To that end, we combine modern neuroscientific methods (e.g., fMRI, M/EEG, and TMS) with perspectives from linguistics, music theory, and cognitive psychology; systematically deconstruct and compare perceptual, cognitive, and expressive stages of M&L processing; and explore their underlying neural networks and inner dynamics.
Our research covers four core fields of inquiry:
- Prosody, Music, and Language
- Music and Language as Combinatorial Systems
- Music and Language in Song
- Music and Language in (Inter)Action
Melody and rhythm are core elements of music that also play multiple roles in language—in the form of prosody. Prosody conveys linguistically relevant information, reveals a speaker’s emotions and intentions, and constitutes a system of aesthetic devices in poetry. How prosodic signals are processed neurally, and how this processing compares to that of music, are the questions pursued in Research Area 1.
Both music and language are rule-based arrangements of discrete elements, such as words or chords. How the brain comes to represent these elements and bind them into structurally meaningful sequences, and how these processes compare between the two domains, are the questions pursued in Research Area 2.
Everywhere in the world, people bind music and language into song. What seems easy on the surface is in fact a complex cognitive task involving intense information exchange within and between the two brain hemispheres. Research Area 3 zooms in on the neural bases of song and singing to elucidate how the brain resolves the entwinement of melody and text, and to explore the cognitive and aesthetic effects of their alignment.
Both speaking and making music are complex audio-motor tasks, and when undertaken in interaction with others, they require a great deal of interpersonal coordination. Research Area 4 therefore focuses on the productive side of music and language and investigates questions of action planning and coordination.