Sounds and meanings
Whether it is music, speech, screams, or environmental sounds, we make sense of what we hear by categorizing acoustic information that unfolds over time.
In our research group, we investigate this crucial but complex phenomenon. Our projects center on the acoustic description of categories (mental representations of abstract concepts such as music, voice quality, or talkers' intentions) and on how these categories are processed. We examine listeners' perception of both natural and manipulated sequences, using methods from psychophysics, statistical modeling, and electrophysiology.
Research Focus
Definition of a "music" category
Across the globe, the phenomena and practices that the English word "music" refers to share common features (e.g., Savage et al., 2015), but music is also recognized as a construct shaped by cultural and historical context as well as by individual differences.
Borders of "humanness" category
Synthetic voice interfaces are increasingly integrated into devices in numerous settings, from living rooms to classrooms and care facilities. Despite tremendous technical progress, synthetic voices still lack "humanity," particularly when it comes to prosody ("how" something is spoken, in addition to what is spoken) and conversational elements such as interjections or non-verbal vocalizations.
Inferring meaning from vocal expressions
Humans are, generally speaking, quite good at inferring meaning from nonverbal expressions conveyed by face, body, or voice. Yet how emotions and intentions are communicated through such expressions remains poorly understood.
Neurophysiological tracking of musical structure
Music, like speech, can be considered a continuous stream of sounds organized in hierarchical structures. Human listeners parse continuous speech into linguistic units such as phrases and sentences; we ask whether, and how, the brain similarly segments and tracks phrases in continuous music.
Music-language categories
Although we quickly identify sounds as music or speech, how we form such abstract categories remains unclear. Moreover, continuous streams of sound are parsed into units of varying length that have yet to be defined.
Singing voice preferences
As the many singing contests and music programs in the media suggest, the singing voice attracts ample attention. Recent studies have shown that Western lay and expert listeners share similar definitions of what is "correct" when listening to untrained (Larrouy-Maestri et al., 2015) and trained singers (Larrouy-Maestri et al., 2017).
Inferring meaning from prosody
The tone of the voice carries information about the emotional state or intentions of a speaker. Whereas the acoustic features that distinguish prosodic signals have attracted considerable attention in recent decades (particularly since Banse & Scherer, 1996), how emotions and intentions are communicated remains poorly understood.
Selected Publications
Bruder, C., Poeppel, D., & Larrouy-Maestri, P. (2024). Perceptual (but not acoustic) features predict singing voice preferences. Scientific Reports, 14(1), 8977. doi: 10.1038/s41598-024-58924-9
Tan, Y., Sun, Z., Teng, X., Larrouy-Maestri, P., Duan, F., & Aoki, S. (2024). Effective network analysis in music listening based on electroencephalogram. Computers and Electrical Engineering, 117, 109191. doi: 10.1016/j.compeleceng.2024.109191
Larrouy-Maestri, P., Poeppel, D., & Pell, M. (2024). The sound of emotional prosody: Nearly three decades of research and future directions. Perspectives on Psychological Science. doi: 10.1177/17456916231217722
Larrouy-Maestri, P., & Wald-Fuhrmann, M. (2022 & 2023). Preregistrations: osf.io/sb24q, osf.io/azkcp
Fink, L., Hörster, M., Poeppel, D., Wald-Fuhrmann, M., & Larrouy-Maestri, P. (submitted). Semantic and acoustic features underlying speech-music categories.
Bruder, C., Poeppel, D., & Larrouy-Maestri, P. (2023). Perceptual (but not acoustic) features predict singing voice preferences [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/qvp8t
van Rijn, P., Poeppel, D., & Larrouy-Maestri, P. (2023). Contribution of pitch measures over time to emotion classification accuracy [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/pnysd
Larrouy-Maestri, P., Kegel, V., Schlotz, W., van Rijn, P., Menninghaus, W., & Poeppel, D. (2023). The meaning of spoken sentences can be twisted by a forward shift of prosodic stress. Journal of Experimental Psychology: General.
van Rijn, P., & Larrouy-Maestri, P. (2023). Modeling individual and cross-cultural variation in the mapping of emotions to speech prosody. Nature Human Behaviour. doi: 10.1038/s41562-022-01505-5
Bruder, C., & Larrouy-Maestri, P. (2023). Classical singers are also proficient in non-classical singing. Frontiers in Psychology, 14, 1215370. doi: 10.3389/fpsyg.2023.1215370
Hołubowska, Z., Teng, X., & Larrouy-Maestri, P. (2022). Effect of regularity on the behavioral and neural tracking of musical phrases [Conference presentation].
Bruder, C., Poeppel, D., & Larrouy-Maestri, P. (2022, August). The role of typicality in singing voice preferences [Paper presentation]. 14th Pan European Voice Conference (PEVoC), Tallinn, Estonia.
Holz, N., Larrouy-Maestri, P., & Poeppel, D. (2022). The variably intense vocalizations of affect and emotion (VIVAE) corpus prompts new perspective on nonspeech perception. Emotion, 22(1), 213–225. doi: 10.1037/emo0001048
Larrouy-Maestri, P., Poeppel, D., & Pfordresher, P. Q. (2022). Pitch units in music and speech prosody. In R. Wiese & M. Scharinger (Eds.), How language speaks to music: Prosody from a cross-domain perspective (pp. 17-41). De Gruyter. doi: 10.1515/9783110770186
Bruder, C., Jacoby, N., Poeppel, D., & Larrouy-Maestri, P. (2021, November). What makes a singer your favorite one? [Oral presentation]. International Conference of Students of Systematic Musicology (SysMus21), Online/Aarhus, Denmark.
Bruder, C., Jacoby, N., Poeppel, D., & Larrouy-Maestri, P. (2021, July). Predicting aesthetic ratings from the acoustics of sung melodies [Oral presentation]. 16th International Conference on Music Perception and Cognition and 11th Triennial Conference of the European Society for the Cognitive Sciences of Music (ICMPC-ESCOM2021), online.
Teng, X., Larrouy-Maestri, P., & Poeppel, D. (2021). Musical phrasal segmentation and structural prediction underpinned by neural modulation and phase precession at ultra-low frequencies [Preprint]. doi: 10.1101/2021.07.15.452556
Durojaye, C., Fink, L., Roeske, T., Wald-Fuhrmann, M., & Larrouy-Maestri, P. (2021). Perception of Nigerian Dùndún talking drum performances as speech-like vs. music-like: The role of familiarity and acoustic cues. Frontiers in Psychology, 12, 652673. doi: 10.3389/fpsyg.2021.652673
Holz, N., Larrouy-Maestri, P., & Poeppel, D. (2021). The paradoxical role of emotional intensity in the perception of vocal affect. Scientific Reports, 11, 9663. doi: 10.1038/s41598-021-88431-0