Many recent theories of perception and cognition suggest that the brain uses internal models of the world to predict forthcoming events. Compelling evidence from a wide range of studies shows that prediction also occurs during language comprehension and music listening. A successful system of this type must predict not only the content of future events (‘what’) but also their timing (‘when’). Across perceptual and cognitive studies more broadly, these research questions are often addressed within a Bayesian framework. In language research, predictive psycholinguistic models such as analysis-by-synthesis provide useful heuristics for framing the architecture. The experimental research we pursue on predictive mechanisms draws on phenomena across many domains to investigate which elementary representations and operations are employed in predicting the (near) future.
A central yet unanswered question in neuroscience concerns the cortical mechanisms by which the brain predictively controls perception and higher-level cognitive functions such as language. My studies investigate how predictions about upcoming stimuli are implemented in brain circuits at different spatial scales, through which mechanisms sensory predictions aid perception and higher-level cognition, and how the brain’s predictive machinery may be harnessed to systematically improve sensory and memory functions.
Attention to sensory stimuli is never uniformly distributed. We tested whether time-based and feature-based aspects of sensory attention interact in facilitating the detection of new stimuli in a stream. We recorded behavioural and electroencephalographic (EEG) data while participants attended to repeating pure tones (standard tones) that unpredictably changed in feature (deviant tones). Participants responded more rapidly to deviant tones that arrived after longer waiting times (time or hazard-rate effect), as well as to deviants with larger rather than smaller deviance magnitudes (feature effect).
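The hazard-rate effect has a simple formal reading: the conditional probability that the deviant occurs now, given that it has not occurred yet, rises as time elapses. The sketch below illustrates this with an assumed uniform distribution over four possible onset times; the specific distribution is hypothetical, not taken from the study.

```python
# Illustrative sketch (assumed onset distribution, not from the study):
# the hazard rate at time t is h(t) = P(T = t) / P(T >= t).
# With deviant onsets uniform over four possible times, the hazard rate
# increases as time elapses, offering a normative account of why
# responses are faster to deviants that arrive later in the stream.

onsets = [1, 2, 3, 4]                 # possible deviant onset times (s)
p = [1 / len(onsets)] * len(onsets)   # P(T = t), uniform by assumption

hazard = []
remaining = 1.0                       # P(T >= t), shrinks as time elapses
for pt in p:
    hazard.append(pt / remaining)
    remaining -= pt

print(hazard)  # [0.25, 0.333..., 0.5, 1.0] -- monotonically increasing
```

Under this reading, longer waits imply higher momentary expectancy, which is consistent with the faster responses to late deviants reported above.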