Neurophysiological tracking of music

Collaboration between Xiangbin Teng and Pauline Larrouy-Maestri

Music, like speech, can be considered a continuous stream of sounds organized in hierarchical structures. Human listeners parse continuous speech into linguistic units such as phrases and sentences. Inspired by EEG and MEG studies on speech parsing, this project examines the neural signatures of parsing musical structures. As a first step, we focus on harmonically driven phrases. We asked 25 participants to listen to Bach chorales while undergoing EEG recording. Eleven selected pieces were manipulated so that the salience of the musical structures was progressively reduced, and each version was synthesized at three different tempi (66, 75, and 85 bpm). Employing advanced EEG component analysis and machine learning techniques, we observed that listeners could rely on harmonic structure alone to identify the beginning of each phrase and hence to parse continuous music streams. Moreover, the robustness of neural tracking of musical phrases correlated positively with participants’ musical training. This project has already demonstrated that the brain extracts musical structures online and segments continuous music streams into units with ‘musical’ meaning. Our next step is to examine how the tracking of units of different sizes is combined, to shed light on both music cognition and the neurophysiological tracking of auditory sequences.
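To illustrate the general logic of phrase-level neural tracking (not the project's actual analysis pipeline, which combined EEG component analysis and machine learning), a minimal sketch: if listeners track phrases, EEG power should show a peak at the phrase rate. The function below, with an illustrative name and parameters, contrasts spectral power at a hypothesized phrase frequency against neighboring frequency bins; all signal values here are synthetic.

```python
import numpy as np

def phrase_tracking_index(eeg, fs, phrase_rate, n_neighbors=3):
    """Contrast spectral power at an assumed phrase rate with
    neighboring frequency bins (illustrative sketch only).

    eeg: 1-D array, one channel's time course
    fs: sampling rate in Hz
    phrase_rate: hypothesized phrase frequency in Hz
    """
    n = len(eeg)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    target = int(np.argmin(np.abs(freqs - phrase_rate)))
    # Average power in bins flanking the target on both sides
    lo = power[max(target - n_neighbors, 1):target]
    hi = power[target + 1:target + 1 + n_neighbors]
    neighbors = np.concatenate([lo, hi]).mean()
    return power[target] / neighbors  # ratio > 1 suggests tracking

# Synthetic check: a signal oscillating at the phrase rate plus noise
fs = 250.0
t = np.arange(0, 120, 1 / fs)   # two minutes of simulated "EEG"
rate = 0.25                     # e.g., one phrase every 4 seconds
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * rate * t) + 0.5 * rng.standard_normal(t.size)
print(phrase_tracking_index(sig, fs, rate) > 1.0)
```

A ratio well above 1 at the phrase rate, but not in control conditions where phrase salience is reduced, would indicate tracking of the phrase-level structure rather than of lower-level acoustics.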