Efficient compression and linguistic meaning in humans and machines
Speaker: Noga Zaslavsky
Zoom Link: https://us02web.zoom.us/j/89968217623
Optional reading: https://arxiv.org/pdf/2005.06641.pdf
Abstract: In this talk, I will argue that efficient compression may provide a fundamental principle underlying the human capacity to communicate and reason about meaning, and may help to inform machines with similar linguistic abilities. In the first part of the talk I will address this at the population level, showing that pressure for efficient compression may drive the evolution of word meanings across languages and may give rise to human-like semantic representations in artificial neural networks trained for vision. Specifically, I will argue that languages compress meanings into words by optimizing the Information Bottleneck (IB) tradeoff between the complexity and accuracy of the lexicon, and will support this idea with evidence from several semantic domains, including names for colors, containers, and animals. In the second part of the talk I will address my general proposal at the agent level, where local context-dependent interactions influence the meaning of utterances. I will show that efficient compression may give rise to human pragmatic reasoning in reference games, suggesting a novel and principled approach to informing machine learning systems with pragmatic skills. Taken together, these findings suggest that efficient compression may be a major force shaping the structure and evolution of human language.
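For readers unfamiliar with the IB tradeoff mentioned in the abstract, it is standardly formalized as follows (this is the textbook formulation from the IB literature; the exact notation and variable names in the talk and the optional reading may differ):

```latex
% Information Bottleneck objective for a lexicon:
% q(w|m) is a stochastic encoder mapping meanings m to words w.
% Complexity is the mutual information I_q(M;W) between meanings and words;
% accuracy is I_q(W;U), where U denotes the intended referents.
\min_{q(w \mid m)} \; \mathcal{F}_\beta[q] \;=\; I_q(M;W) \;-\; \beta \, I_q(W;U)
% beta >= 1 controls the tradeoff: larger beta favors accuracy
% (a richer lexicon), smaller beta favors compression (fewer distinctions).
```

Intuitively, a lexicon is efficient if no alternative encoder achieves higher accuracy at the same complexity, i.e. it lies on the IB frontier traced out by varying beta.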