The Science of Awe: Evolution, Culture, and Self-Transcendence
Abstract
Episodic memory, the ability to recollect past events, has been the focus of extensive study in psychology and cognitive neuroscience. Almost all of the research on this topic has examined the ability to recall or recognize specific items from a list, and this work has identified the hippocampus as a critical region for episodic memory. Real-life episodes, however, unfold over a long timescale, and they have a complex structure that is built on knowledge about past events. We have recently proposed a computational framework to understand the roles of different cortico-hippocampal networks in the generation of event models, event segmentation, and episodic memory. In this talk, I will describe new research aimed at understanding the roles of particular brain networks in representing information about people, places, and situations during the experience of complex events and later, when this information is recalled. I will also describe evidence suggesting how event segmentation and memory for complex events are affected by aging. Collectively, this work highlights the overlap between the neural circuitry that supports episodic memory and the ability to form event models that enable comprehension of film, prose, and spoken conversation.

In this talk, I will ask whether the computations underlying the human brain are similar to or different from those underlying deep neural networks. The ability to think and reason using natural language separates us from other animals and machines. I will focus on the neural processes that support natural language processing and language development in children. Our research aims to model natural language processing in the wild. I will provide evidence that our neural code shares some computational principles with deep language models, indicating that, to some extent, the brain relies on overparameterized optimization to comprehend and produce language. At the same time, I will present evidence that the brain differs from deep language models as speakers try to convey new ideas and thoughts. Together, our findings expose unexpected similarities to deep neural networks while pointing to crucial human-centric properties that these machines still lack.
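Claims that the neural code shares computational principles with deep language models are commonly tested with linear encoding models that map a language model's contextual embeddings onto recorded brain activity. The sketch below illustrates that general logic only; it is not the speakers' actual pipeline. All data are simulated, and the dimensions (emb_dim, n_electrodes) are illustrative assumptions; in a real study the embeddings would come from a pretrained model such as GPT-2 and the responses from word-aligned ECoG or fMRI recordings.

```python
# Minimal sketch of a linear encoding model on simulated data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, emb_dim, n_electrodes = 2000, 64, 10

# Hypothetical contextual embeddings, one vector per word in a narrative.
embeddings = rng.standard_normal((n_words, emb_dim))

# Simulated neural responses: a linear readout of the embeddings plus noise,
# standing in for word-aligned electrode measurements.
true_weights = rng.standard_normal((emb_dim, n_electrodes))
neural = embeddings @ true_weights + 0.5 * rng.standard_normal((n_words, n_electrodes))

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, neural, test_size=0.2, random_state=0
)

# Ridge regression maps embedding space to each electrode's response.
model = Ridge(alpha=1.0).fit(X_train, y_train)
pred = model.predict(X_test)

# Encoding performance: per-electrode correlation between predicted
# and held-out responses.
r = [np.corrcoef(pred[:, e], y_test[:, e])[0, 1] for e in range(n_electrodes)]
print(f"mean held-out correlation: {np.mean(r):.3f}")
```

To the extent that such a linear mapping predicts held-out neural responses above chance, it is taken as evidence that the brain and the model organize linguistic information in partially shared ways.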