Many recent theories of perception and cognition suggest that the brain uses internal models of the world to predict forthcoming events. Compelling evidence from a wide range of studies indicates that prediction also occurs during language comprehension and music listening. A successful system of this type needs to predict not only the content of future events (‘what’) but also their timing (‘when’). Across perceptual and cognitive studies more broadly, these questions are often addressed within a Bayesian framework. In language research, predictive psycholinguistic models such as analysis-by-synthesis provide useful heuristics for framing the architecture. Our experimental research on predictive mechanisms draws on phenomena from many domains to investigate which elementary representations and operations are employed in predicting the (near) future.
A central yet unanswered question in neuroscience concerns the cortical mechanisms by which the brain predictively controls perception and higher-level cognitive functions such as language. My studies investigate how predictions about upcoming stimuli are implemented in brain circuits at different spatial scales, the mechanisms by which sensory predictions aid perception and higher-level cognition, and how the brain’s predictive machinery may be harnessed to systematically improve sensory and memory function.
Our brain not only processes incoming information from the environment, but also continuously predicts upcoming events. These predictions can be based on different representational structures in the brain: some originate from abstract models of the world (schemas), while others rely on unique autobiographical experiences. Can we differentiate these predictions based on their neurobiological underpinnings and algorithmic nature?
Musicians are often said to possess "good timing". Technically, this time-based behavior can be defined as appropriate action based on accurate prediction. Or, phrased differently: musicians know when to play because they estimate when events will occur in the music. This ubiquitous process matters in domains as diverse as sports, dancing, and navigating urban traffic.
Our environment contains a rich spectrum of physical signals carrying different types of information. These signals can be notionally assigned to three categories: the What, relating to the type of an event; the Where, pertaining to an object's position in space; and the When, giving an event its position in time.
Attention to sensory stimuli is never uniformly distributed. We tested whether time-based and feature-based aspects of sensory attention interact in facilitating the detection of new stimuli in a stream. We recorded behavioural and electroencephalographic (EEG) data while participants attended to repeating pure tones (standard tones) that unpredictably changed in feature (deviant tones). Participants responded more rapidly to deviant tones that occurred after longer waiting times (time or hazard-rate effect), as well as to deviant events carrying larger rather than smaller deviance magnitudes (feature effect).
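The hazard-rate logic behind the timing effect can be made concrete with a minimal numerical sketch (illustrative only, not taken from the study). The hazard rate h(t) = f(t) / (1 − F(t)) gives the conditional probability that an event occurs now, given it has not occurred yet; under a uniform distribution of deviant onsets over a fixed number of positions, it rises with elapsed time, which is why longer-awaited deviants are more expected:

```python
import numpy as np

# Hypothetical example: deviant onsets uniformly distributed over 5 positions.
f = np.full(5, 1 / 5)                            # P(deviant at each position)
F = np.cumsum(f)                                 # cumulative probability
survival = 1 - np.concatenate(([0.0], F[:-1]))   # P(not yet occurred before t)
hazard = f / survival                            # conditional P(occurs now)

print(hazard)  # [0.2, 0.25, 0.333..., 0.5, 1.0] — rises monotonically
```

The monotonic rise of the hazard rate mirrors the behavioural finding: the longer a deviant is withheld, the more certain its imminent occurrence, and the faster the response.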
Advances in science require theories, to make sense of observations and to predict new ones, but also sensitive methods that enable observations at the right scale. Just as we need a telescope to observe distant galaxies, we need methods to record from and perturb the human brain at multiple scales, from networks and areas down to cortical layers, columns, and single units. Our lab aims to advance methods for human neuroscience across these scales.
In a series of auditory experiments using the roving standard paradigm, I manipulate both time-based (when) and feature-based (what) aspects of prediction to determine whether and how they facilitate behavior. Electroencephalographic (EEG) data are recorded concurrently to test whether high or low frequencies differentially encode these predictions.