Friday, 09.05.2025, 15:00–17:00
Online Event

Talk by Taelin Karidi

Abstract: 

The broad application of Large Language Models (LLMs) is hindered by their opaque, black-box nature, which frustrates attempts to understand how they encode and represent knowledge. This talk will explore geometric approaches to addressing these challenges. We will discuss how different word senses are encoded within LLMs’ hidden representations and how earlier network layers can be leveraged both for downstream applications and for interpretability purposes. Additionally, we will extend the discussion to a cross-lingual perspective and show how distributional methods can help tackle long-standing fundamental questions in cognitive science, such as how meaning varies across languages (e.g., do English "green" and French "vert" convey the same meaning?).
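As a rough illustration of the kind of analysis involved (a minimal sketch, not material from the talk; the model "bert-base-uncased", the layer index, and the example sentences are arbitrary assumptions), one can extract the hidden representation of an ambiguous word at a given layer and compare it across contexts:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed model choice; any encoder with accessible hidden states would do.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def word_vector(sentence: str, word: str, layer: int) -> torch.Tensor:
    """Hidden state of `word` (assumed to be a single token) at `layer`."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    # hidden_states is a tuple: the embedding layer plus one tensor per layer.
    states = out.hidden_states[layer][0]  # [seq_len, dim]
    pos = (enc["input_ids"][0] == tok.convert_tokens_to_ids(word)).nonzero()[0, 0]
    return states[pos]

# Two senses of "bank"; their layer-8 vectors should be clearly separated.
v1 = word_vector("She sat on the river bank.", "bank", layer=8)
v2 = word_vector("He deposited cash at the bank.", "bank", layer=8)
print(torch.nn.functional.cosine_similarity(v1, v2, dim=0).item())
```

Repeating this comparison layer by layer indicates where in the network sense distinctions emerge.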
As part of this discussion, we will also consider the distinction between global alignment methods, which align representations across different spaces, and local methods, which operate within a single space to uncover more fine-grained patterns.
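To make that distinction concrete, here is a toy sketch (again an illustration, not the talk's method; the data are synthetic and all dimensions are made up) contrasting a standard global alignment, orthogonal Procrustes, with a local nearest-neighbour overlap computed separately within each space:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 32, 5

# Synthetic stand-ins for two embedding spaces: Y is a rotated, noisy copy of X.
X = rng.normal(size=(n, d))
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
Y = X @ Q + 0.01 * rng.normal(size=(n, d))

# Global method: orthogonal Procrustes finds the rotation W minimizing ||Y W - X||.
U, _, Vt = np.linalg.svd(Y.T @ X)
W = U @ Vt
residual = np.linalg.norm(Y @ W - X) / np.linalg.norm(X)

# Local method: compare k-nearest-neighbour sets within each space; no mapping needed.
def knn_sets(M):
    Mn = M / np.linalg.norm(M, axis=1, keepdims=True)
    sims = Mn @ Mn.T
    np.fill_diagonal(sims, -np.inf)
    return [set(np.argsort(-row)[:k]) for row in sims]

overlap = np.mean([len(a & b) / k for a, b in zip(knn_sets(X), knn_sets(Y))])
print(f"Procrustes residual: {residual:.3f}, mean {k}-NN overlap: {overlap:.2f}")
```

Because rotation preserves inner products, the local neighbourhoods agree almost perfectly even before any cross-space mapping is computed, which is the kind of fine-grained, within-space structure local methods exploit.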
This research also has broader implications, from cross-linguistic knowledge transfer to multicultural NLP, including the detection of cultural knowledge and cultural biases in LLMs.

Zoom Link