Are you listening?

Speech, music, and many natural sounds have a rich temporal structure spanning multiple timescales. Listening and understanding require integrating acoustic information over time periods of various lengths to extract relevant regularities — for example, an intonation change over a word that signals intent. A fundamental question about this process is: how does the auditory system process information from different, co-occurring sounds over multiple timescales?

Xiangbin Teng from the Department of Neuroscience at the Max Planck Institute for Empirical Aesthetics presents new findings on auditory processing. In a recent magnetoencephalography study, he and his colleagues found that the auditory system does not treat all sound inputs uniformly but predominantly operates in a two-timescale processing mode, segregating shorter and longer timescales, so that our perception of natural sounds feels seamless.

The study was published in PLOS Biology and is available as an open-access publication at the link below.


Original Publication:

Teng X, Tian X, Rowland J, Poeppel D (2017) Concurrent temporal channels for auditory processing: Oscillatory neural entrainment reveals segregation of function at different scales. PLoS Biol 15(11): e2000812. doi.org/10.1371/journal.pbio.2000812


Contact:

Xiangbin Teng

Department of Neuroscience

Max Planck Institute for Empirical Aesthetics, Frankfurt/Main

Phone +49 69 8300479-329


Marilena Hoff

Press and PR

Max Planck Institute for Empirical Aesthetics, Frankfurt/Main

Phone +49 69 8300479-665