Brain Science: More Than Just Child’s Play?
How do humans orient themselves in their environment? What happens in our brains when our eyes are presented with more information than they can apprehend? Researchers from the Max Planck Institute for Empirical Aesthetics, Princeton University, and the Stevens Institute of Technology have published a large-scale study in which they shed unprecedented light on these questions.
An essential function of the human visual system is to locate objects in space and navigate visual scenes. However, we are unable to perceive—and register—all details of the world around us. For this reason, the visual system combines incomplete sensory information with prior experience, allowing the brain to draw conclusions about a given environment.
In order to better understand this process, the international research team developed an experimental paradigm that reveals the structure of spatial memory representations in the brain in revolutionary detail. More than 9,000 people took part in their online experiments, the findings of which were recently published in the Proceedings of the National Academy of Sciences (PNAS).
The structure of the researchers’ experiment resembled the classic game of “broken telephone,” but using images instead of words. The participants were organized into virtual sequences and given a simple memory task: to remember precise point locations within images. Each person’s answer formed the stimulus for the next person, and multiple iterations of this process revealed a detailed picture of shared spatial memory representations. Nori Jacoby of the Max Planck Institute for Empirical Aesthetics explains:
“Contrary to earlier assumptions, our results show that memory is organized systematically around clear spatial landmarks, such as the vertices of objects. Until now, it was commonly held that spatial memory was oriented toward the centers of specific visual regions—for example, the centers of mass inside the boundaries of objects.”
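The chain procedure can be illustrated with a small simulation. The sketch below is a hypothetical toy model, not the study’s actual code: each simulated “participant” reproduces the previous response with random noise plus a small bias toward a spatial landmark (the `pull`, `noise`, and `landmark` parameters are illustrative assumptions). Over many iterations, responses drift toward the landmark, which is how serial reproduction exposes shared memory biases.

```python
import random

def serial_reproduction(start, landmark, steps=50, pull=0.2, noise=5.0, seed=0):
    """Toy transmission chain: each step reproduces the previous response
    with Gaussian noise plus a weak bias toward a landmark position.
    (Illustrative parameters; not the published experimental setup.)"""
    rng = random.Random(seed)
    x = start
    trace = [x]
    for _ in range(steps):
        # noisy reproduction, slightly attracted to the landmark
        x = x + pull * (landmark - x) + rng.gauss(0, noise)
        trace.append(x)
    return trace

chain = serial_reproduction(start=100.0, landmark=0.0)
# later responses in the chain cluster near the landmark,
# revealing the bias that shapes each individual reproduction
```

In the real experiments, of course, each step in the chain was a different human participant rather than a noise model, and the data revealed where the attractors actually lie.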
Beyond spatial memory, the work highlights the promise of the experimental procedure itself. By arranging participants on crowdsourcing platforms into virtual sequences, the researchers were able to exploit the properties of the telephone-game process on a large scale. The study’s results show that this approach to estimating shared perceptual representations can yield new theoretical insights in virtually any visual memory domain.
Original Publication:
Thomas A. Langlois, Nori Jacoby, Jordan Suchow and Thomas L. Griffiths (2021): Serial Reproduction Reveals the Geometry of Visuospatial Representations. Proceedings of the National Academy of Sciences (PNAS) 118(13), e2012938118.
doi: 10.1073/pnas.2012938118
Contact:
Nori Jacoby