Gibbs Sampling with People

As cognitive scientists, we are often interested in mapping the relationship between external stimuli (e.g., spoken sentences, musical chords, faces) and the semantic features that the mind derives from them (e.g., happiness, sadness, pleasantness). Traditional methods for studying such relationships (e.g., non-adaptive rating experiments) work well when the stimulus space is low-dimensional and the semantic features are simple, but they struggle in more complex cognitive domains. We have developed a new technique, termed ‘Gibbs Sampling with People’, designed to overcome this problem. It is an adaptive technique, inspired by the Gibbs sampling method from computational statistics and machine learning, in which many participants collaborate to navigate a stimulus space and identify regions associated with a given semantic concept, for example ‘pleasantness’.

In each trial, the participant is presented with a stimulus and a slider, where the slider is coupled to a particular dimension of the stimulus space that changes from trial to trial. The participant is instructed to move the slider to find the stimulus most strongly associated with the target semantic concept. The resulting stimulus is then passed along a chain of participants, with each successive participant optimizing a different dimension of the stimulus. Under our cognitive decision model, the emergent process corresponds to a Gibbs sampler that maps the relationship between the stimulus space and the target semantic concept.

In a recent paper (Harrison et al., 2020, Thirty-fourth Conference on Neural Information Processing Systems), we demonstrate this technique in four domains that vary substantially in complexity: colors, musical chords, emotional prosody, and faces. We show that the method generalizes well across these domains, generating interpretable and interesting results.
Further, we show that the method can be coupled with state-of-the-art deep neural synthesis models to study the perception of very high-dimensional stimulus spaces, such as natural images. In the coming months, we are excited to apply these techniques to a variety of domains, with a particular focus on cross-cultural studies investigating how cultural experience shapes the semantics of perception.
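The chain described above can be illustrated with a small simulation. In this toy Python sketch (all names and the scoring function are illustrative, not taken from the paper), a softmax over discretized slider positions stands in for a noisy participant moving the slider; at low temperature the resulting chain behaves like a Gibbs sampler over a distribution proportional to exp(score / temperature).

```python
import math
import random

def gsp_chain(score, n_dims, n_trials, temperature=0.01, seed=0):
    """Toy sketch of a GSP-style chain (all names illustrative).

    `score(stimulus)` stands in for how strongly participants associate a
    stimulus with the target concept.  Each simulated trial couples the
    slider to one dimension and resamples that dimension from a softmax
    over slider positions, mimicking a noisy participant response.
    """
    rng = random.Random(seed)
    stimulus = [rng.random() for _ in range(n_dims)]  # random starting point
    grid = [i / 100 for i in range(101)]  # discretized slider positions
    samples = []
    for trial in range(n_trials):
        dim = trial % n_dims  # cycle through the stimulus dimensions
        # Weight each slider position by its (tempered) association score.
        weights = [
            math.exp(score(stimulus[:dim] + [v] + stimulus[dim + 1:]) / temperature)
            for v in grid
        ]
        stimulus[dim] = rng.choices(grid, weights=weights)[0]
        samples.append(list(stimulus))
    return samples

# Example: a hypothetical score that peaks at the "most pleasant" point
# (0.7, 0.7, 0.7); the chain's samples concentrate around that point.
samples = gsp_chain(lambda s: -sum((x - 0.7) ** 2 for x in s),
                    n_dims=3, n_trials=300)
```

In the human experiment the conditional distribution is implemented by participants' noisy slider responses rather than an explicit score function; this sketch only illustrates the coordinate-wise resampling structure of the chain.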

Collaborators

Pauline Larrouy-Maestri, Pol van Rijn

Sound examples

Sound examples for emotional prosody

Sound examples for musical triads

More materials, code, and other information coming soon!

Team

Dr. Peter Harrison
Research Group Computational Auditory Perception
Guest Researcher

Raja Marjieh
Research Group Computational Auditory Perception
Guest Researcher

Dr. Manuel Anglada-Tort
Research Group Computational Auditory Perception
Research Scientist
+49 69 8300479-822

Ofer Tchernichovski
Research Group Computational Auditory Perception
Guest Researcher

Dr. Nori Jacoby
Research Group Computational Auditory Perception
Research Group Leader
+49 69 8300479-820