How do musicians attend to their own and their co-performer's sound during joint performance?

Ensemble music making requires musicians to attend to their own and their co-performers' parts. Although it has been shown repeatedly that dividing attention impairs task execution, musicians can produce performances of excellent quality even when they need to attend to dozens of other players (e.g., in an orchestra). How is this possible?

It has been suggested that musicians use a special attending mode during joint performance. Instead of dividing attention equally between all independent parts, musicians attend to one part (typically their own) with high priority and to the other parts with lower priority, while integrating the overall outcome, unless the situation demands otherwise. This strategy saves attentional resources in two ways: (1) by allocating fewer resources to low-priority parts, and (2) by integrating all parts, which reduces the number of sound streams that must be attended. In the current study, we investigate what natural attentional prioritization looks like during joint music making and how it is influenced by changing performance qualities: familiarity with the partner's part and synchrony of the performance.

To this end, we recorded EEG from fourteen piano duos while they played simple pieces together. Participants were allowed to practice their partner's part before the experiment for only half of the pieces. Furthermore, we manipulated their performance synchrony using congruent or incongruent tempo instructions. To understand how attentional resources are distributed during joint play, we analysed the extent to which the pianists' neural signals encoded their own and their partner's sounds: the stronger the encoding, the stronger the focus on that performer.
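To illustrate what "encoding strength" can mean in practice, here is a minimal sketch of one common approach: a lagged linear (temporal response function style) regression from a sound envelope to an EEG channel, with cross-validated prediction accuracy as the encoding index. This is only an illustrative example with toy data and assumed parameter values, not the analysis pipeline used in the study.

```python
# Illustrative sketch (not the study's actual pipeline): quantify how strongly
# an EEG signal "encodes" a sound envelope via lagged linear regression.
# All names, sizes, and parameters below are assumptions for demonstration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

fs = 64                       # assumed (downsampled) sampling rate in Hz
n_samples = fs * 120          # two minutes of toy data
rng = np.random.default_rng(0)

# Toy stand-ins for a sound envelope (own or partner's part) and one EEG channel.
envelope = rng.standard_normal(n_samples)
eeg = np.convolve(envelope, rng.standard_normal(fs // 4), mode="same")
eeg += rng.standard_normal(n_samples)   # add measurement noise

# Build time-lagged copies of the envelope (0-250 ms of lags).
max_lag = fs // 4
X = np.column_stack([np.roll(envelope, lag) for lag in range(max_lag)])
X[:max_lag] = 0                          # discard wrapped-around samples

# Cross-validated prediction of the EEG from the envelope serves as a simple
# encoding-strength index: higher scores indicate stronger encoding.
scores = cross_val_score(Ridge(alpha=1.0), X, eeg, cv=5, scoring="r2")
print(f"Encoding strength (mean cross-validated R^2): {scores.mean():.3f}")
```

Comparing such scores between conditions (e.g., familiar vs. unfamiliar partner part, synchronous vs. asynchronous performance) is one way to relate encoding strength to attentional focus.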

Our results suggest that the allocation of a player's attention is indeed modulated by these performance qualities. Both one's own and the partner's sounds were encoded more strongly in well-synchronized performances, probably because their integration is easier than for temporally incompatible parts. Furthermore, players encoded their partner's sounds more strongly when they were unfamiliar with that part. This prioritized focus on the partner is likely due to the need to acquire information about the partner's playing in order to create a smooth, synchronous performance. The current study is an important step in the investigation of attention management, as it moves traditional research on identifying attended sound streams in multi-voice recordings into a more social, interactive setting.
