The perception of dynamically changing signals, the very basis of listening to language or music, or seeing naturalistic visual scenes, requires an analysis of the temporal information that forms (part of) the basis of such signals. What are the temporal primitives that underlie their perceptual analysis? How is incoming information temporally “sampled”? What type of temporal information is necessary to experience, say, rhythm, or syllable duration, or temporal intervals, or change in a sequence? Previous research suggests that human perceptual systems may optimize their processing by operating within specific temporal ranges, instead of in a unitary way across a continuum of temporal variation. We aim to identify the temporal building blocks that underlie perception and cognition, from the perspective that these building blocks are part of the operating system of the mind/brain that is provided by neurophysiological principles.
Humans rely on the analysis of sensory input as an important source of knowledge about the world. Predicting the "when" of future events, in particular, is crucial for successful interaction with our environment. How does the brain exploit the temporal structure of sensory information to predict upcoming events?
In this project we test the influence of rhythmic speech production on speech perception. Auditory perception has been shown to utilize temporal predictions from the motor system to increase its performance (Arnal & Giraud, 2012; Merchant & Yarrow, 2016).
Audition involves a deep temporal hierarchy of transformations supporting capacities such as speech and music perception. How are representations structured throughout the auditory system?
The body is our primary interface with the world: it allows us to gather inputs from the outside, to build a representation of the world, and to act on and directly manipulate the environment. However, we do not perceive our body accurately.
Remembering the order of events is critical for everyday functioning. For instance, after a traffic accident it is important to know and to remember whether the traffic light turned from red to green or from green to red. Our ability to track temporal order generally declines with age, and is impaired in patients with neurodegenerative diseases such as Alzheimer's disease. What brain mechanisms support the capacity to encode temporal order?
The processing of temporal information ranging from tens to hundreds of milliseconds is essential for survival and for daily behaviors such as speech perception, music appreciation, dancing, playing sports, and driving a car. In this project my goal is to understand how the central nervous system tracks time. By combining electroencephalography, psychophysics, and eye-tracking, I aim to gain new insights into how the human brain perceives time.
For thousands of years people have admired the beauty of birdsong. Yet scientific understanding of this phenomenon remains largely restricted to learning and production mechanisms, as well as ecological function (how songs are used to attract mates and defend territories).
To create a continuous conscious percept, our brain concurrently carries out multiple tasks: sampling sensory information, predicting upcoming events, storing long-term memories, maintaining and reactivating information from the past, and more.
Visual perception is subjective and varies across individuals; for example, the physical size of an object is perceived differently by different observers. Where does this variability come from? We are investigating the role that short-range structural and functional connections play in our subjective experience of space and perceived size.
We do not perceive our body, and in particular our hands, accurately: hands are perceived as distorted in their width and length. This phenomenon is observed in healthy individuals as well as in neurological and psychiatric disorders. What explains these distortions in body representation?
Humans are able to spontaneously and rapidly extract information about the temporal structure of event sequences (Maheu et al., 2019). Statistical learning is one mechanism by which the brain is able to segment structured sequences, such as continuous speech, into meaningful units – even when the only cues to locating word boundaries lie in the transitional probabilities between individual syllables (Saffran et al., 1996; Aslin et al., 1998). We set out to investigate the perceptual consequences of this automatic and implicit segmentation process.
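The transitional-probability cue described above can be made concrete with a minimal sketch. A forward transitional probability is P(next syllable | current syllable) = count(pair) / count(current syllable); within a word this probability is high, while across a word boundary it drops, so placing boundaries at TP dips recovers the words. The toy stream, word order, and threshold below are illustrative assumptions, not the actual stimuli or analysis of the cited studies (the syllable triplets merely echo the style of Saffran et al.'s materials):

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate forward transitional probabilities P(next | current)
    from a syllable stream: count(pair) / count(first syllable)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, tps, threshold=0.75):
    """Insert a word boundary wherever the transitional probability
    between adjacent syllables dips below the threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Toy stream: three trisyllabic "words" concatenated in varied order,
# so within-word TPs are 1.0 while between-word TPs are at most 2/3.
words = {"A": ["tu", "pi", "ro"], "B": ["go", "la", "bu"], "C": ["bi", "da", "ku"]}
order = ["A", "B", "C", "A", "C", "B", "A", "B", "C", "B", "A", "C"]
stream = [syl for w in order for syl in words[w]]

tps = transitional_probabilities(stream)
print(segment(stream, tps))  # recovers "tupiro", "golabu", "bidaku"
```

Note that the listener-side model needs no dictionary: the boundaries fall out of the statistics of the stream alone, which is what makes the cue available to infants hearing continuous speech.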