Music and Language in Song

Music and language are processed in large-scale fronto-temporo-parietal neural networks in the left and right hemispheres. What remains unclear, however, is how these networks relate to each other, particularly when language and music are fused, as in song. How does the brain dissociate melody and text in the audio signal (Sammler, 2020)? And to what extent does it integrate structure, meaning, and affective tone between the two streams (Sammler, Baird et al., 2010; Alonso et al., 2014)? Building on insights from Research Area 1 on crosstalk between prosody and language networks (Sammler et al., 2018), Research Area 3 takes a closer look at cross-stream interactions during song perception. Further topics of interest are the extent to which music–language alignment in songs benefits speech perception (Torppa et al., 2020) and how this alignment shapes aesthetic experience.

Featured Publications

Sammler, D. (2020). Splitting speech and music. Science, 367(6481), 974–976.

Alonso, I., Sammler, D., Valabrègue, R., Dinkelacker, V., Dupont, S., Belin, P., & Samson, S. (2014). Hippocampal sclerosis affects fMR-adaptation of lyrics and melodies in songs. Frontiers in Human Neuroscience, 8, 111.

Sammler, D., Baird, A., Valabrègue, R., Clément, S., Dupont, S., Belin, P., & Samson, S. (2010). The relationship of lyrics and tunes in the processing of unfamiliar songs: An fMR adaptation study. The Journal of Neuroscience, 30(10), 3572–3578.