Auditory Multi-Scale Processing
Natural sounds, music, and vocal sounds have a rich temporal structure over multiple timescales, and behaviorally relevant acoustic information is usually carried on more than one timescale. For example, speech conveys linguistic information at several scales: roughly 20-80 ms for phonemic information, 100-300 ms for syllabic information, and more than 1000 ms for intonation. Successful perceptual analysis of auditory signals therefore requires the auditory system to extract acoustic information at multiple scales simultaneously. This raises a specific problem: how are different, co-occurring rates of information processed across multiple timescales, and, by extension, how can the competing demands on temporal and spectral resolution be met at the same time? In this project, we investigate whether these distinct timescales, and in particular their neural implementation, can be illuminated by considering cortical oscillations.
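As a simple illustration of how the integration windows named above relate to oscillatory activity, they can be translated into equivalent frequency ranges via the reciprocal relation f = 1/T. The sketch below assumes only this reciprocal mapping and the window boundaries stated in the paragraph; the band labels in the comments (gamma, theta, delta) are an interpretive gloss, not part of the original text.

```python
# Minimal sketch: map the integration windows named above (phonemic, syllabic,
# intonational) onto equivalent oscillation frequencies via f = 1 / T.
# The reciprocal mapping and the band labels are assumptions for illustration.

windows_ms = {
    "phonemic":     (20.0, 80.0),     # ~20-80 ms
    "syllabic":     (100.0, 300.0),   # ~100-300 ms
    "intonational": (1000.0, None),   # > 1000 ms (no upper bound stated)
}

def window_to_band_hz(t_min_ms, t_max_ms):
    """Convert a temporal window (ms) to its reciprocal frequency range (Hz)."""
    f_max = 1000.0 / t_min_ms                       # shortest window -> highest frequency
    f_min = 1000.0 / t_max_ms if t_max_ms else 0.0  # open-ended window -> toward 0 Hz
    return f_min, f_max

for name, (t_min, t_max) in windows_ms.items():
    f_min, f_max = window_to_band_hz(t_min, t_max)
    print(f"{name:>12s}: {f_min:5.1f}-{f_max:5.1f} Hz")

# Expected output (rounded):
#     phonemic:  12.5- 50.0 Hz   (low-gamma range)
#     syllabic:   3.3- 10.0 Hz   (theta range)
# intonational:   0.0-  1.0 Hz   (delta range)
```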