The tone of voice carries information about a speaker's emotional state or intentions. Although the acoustic features of contrasting prosodic signals have attracted considerable attention over the past decades (particularly since Banse & Scherer, 1996), how emotions and intentions are communicated remains poorly understood. Moreover, although most listeners seem to share the ‘code’ needed to interpret a prosodic signal adequately and access a speaker's emotions or intentions, misunderstandings occur easily.
This project focuses on the cognitive processes involved in prosody comprehension. More specifically, we use methods from psychophysics and electrophysiology to examine how listeners categorize utterances by integrating dynamic acoustic information. By clarifying how listeners deal (or fail to deal) with the acoustic information carried by the tone of voice, we aim to better understand a crucial human ability: the communication of emotions and intentions through speech.