The perception of dynamically changing signals, the very basis of listening to language or music, or of seeing naturalistic visual scenes, requires an analysis of the temporal information that forms (part of) the basis of such signals. What are the temporal primitives that underlie their perceptual analysis? How is incoming information temporally "sampled"? What type of temporal information is necessary to experience, say, rhythm, or syllable duration, or temporal intervals, or change in a sequence? Previous research suggests that human perceptual systems may optimize their processing by operating within specific temporal ranges, rather than uniformly across a continuum of temporal variation. We aim to identify the temporal building blocks that underlie perception and cognition, from the perspective that these building blocks are part of the operating system of the mind/brain that is provided by neurophysiological principles.
Humans rely on the analysis of sensory input as an important source of knowledge about the world. Especially predicting the "when" of future events is crucial for successful interaction with our environment. How does the brain exploit the temporal structure of sensory information to predict upcoming events?
In this project we test the influence of rhythmic speech production on speech perception. Auditory perception has been shown to utilize temporal predictions from the motor system to improve its performance (Arnal & Giraud, 2012; Merchant & Yarrow, 2016).
For thousands of years people have admired the beauty of birdsong. Yet scientific understanding of this phenomenon is still largely restricted to the study of learning and production mechanisms, as well as ecological function (how songs are used to attract mates and defend territories).
To create a continuous conscious percept, our brain concurrently carries out multiple tasks: sampling sensory information, predicting upcoming events, storing long-term memories, and maintaining and reactivating information from the past.