Ellie Pavlick is an Associate Professor of Computer Science and Linguistics at Brown University and a Research Scientist at Google. She received her PhD from the University of Pennsylvania in 2017, where her focus was on paraphrasing and lexical semantics. Her work focuses on computational models of language (currently, primarily LLMs) and their connections to the study of language and cognition more broadly. Ellie leads the Language Understanding and Representation (LUNAR) Lab, which collaborates with Brown's Robotics and Visual Computing labs and with the Department of Cognitive, Linguistic, and Psychological Sciences. She is currently a Mercator Fellow in the RTG "Neuroexplicit Models" at the Saarland Informatics Campus.
Large language models (LLMs) currently dominate AI. They exhibit impressive, often human-like behavior across a wide range of linguistic and non-linguistic tasks. However, LLMs are the product of clever engineering and financial investment rather than of rigorous scientific or theoretical study. Thus, very little is known about how or why LLMs exhibit the behaviors that they do, and as a result they often seem mysterious and unpredictable. In this talk, I will discuss recent work that seeks to characterize the mechanisms LLMs employ in terms of higher-level data structures and algorithms. Using results on a series of simple tasks, I will argue that LLMs are not as inscrutable as they seem, but rather make use of simple and often modular computational components to solve complex problems.