Friday 08.12.2023 10:00 — 11:30
Max Planck Institute for Empirical Aesthetics, ArtLab Foyer

Mindvoyage | Uri Hasson: "Deep Language Models as a Cognitive Model for Natural Language Processing in the Human Brain"

The Mindvoyage lectures feature prominent scholars from different disciplines, including the humanities, biology, neuroscience, and physics. On Friday, December 8, at 10:00 CET, the Mindvoyage lecture series features Professor Uri Hasson of the Department of Psychology and the Neuroscience Institute at Princeton University.

The lectures are presented by Lucia Melloni, Leader of the Research Group Neural Circuits, Consciousness, and Cognition, on behalf of the ARC-COGITATE Consortium.


This is a FREE event, but registration is required. To join the Zoom webinar, please register here.


Naturalistic experimental paradigms in cognitive neuroscience arose from the pressure to test, in real-world contexts, the validity of models derived from highly controlled laboratory experiments. In many cases, however, such efforts led to the realization that models (i.e., explanatory principles) developed under particular experimental manipulations fail to capture many aspects of reality (variance) in the real world. Recent advances in artificial neural networks provide an alternative computational framework for modeling cognition in natural contexts. In contrast to the simplified and interpretable hypotheses we test in the lab, these models learn how to act in the world from massive amounts of real-world examples (big data) by optimizing large models with millions to billions of parameters. Surprisingly, such models' performance matches human performance on many cognitive tasks, including visual perception, language processing, and motor control. At the same time, these models sacrifice understanding in favor of competence: they can act without knowing why their choices are optimal or preferable in a given context.

In this talk, I will ask whether the human brain's underlying computations are similar to or different from the underlying computations in deep neural networks. The ability to think and reason using natural language separates us from other animals and machines. I will focus on the underlying neural processes that support natural language processing and language development in children. Our study aims to model natural language processing in the wild. I will provide evidence that our neural code shares some computational principles with deep language models, indicating that, to some extent, the brain relies on overparameterized optimization methods to comprehend and produce language. At the same time, I will present evidence that the brain differs from deep language models when speakers try to convey new ideas and thoughts. Together, our findings expose some unexpected similarities to deep neural networks while pointing to crucial human-centric properties missing from these machines.

The online event will be held on Zoom. Please note the Data Protection Information Regarding Zoom Webinars.