Although we intuitively know whether someone is speaking or singing, the neuronal mechanisms that drive this experience are not well understood. Whether we perceive auditory sequences as speech or song is associated with certain acoustic features (Merrill & Larrouy-Maestri, 2017). The repetition of auditory sequences also appears relevant to whether material is perceived as spoken or sung, as shown in the well-known speech-to-song illusion (Deutsch, Henthorn, & Lapidis, 2011; Simchy-Gross & Margulis, 2018). This project focuses on temporal structure, identifying rhythmic features that drive our perception of auditory sequences. More specifically, we investigate the neuronal mechanisms involved in the auditory processing of sequences perceived as speech or song.