Erin Grant is a Senior Research Fellow at the Gatsby Computational Neuroscience Unit and the Sainsbury Wellcome Centre at University College London. Erin studies learning and generalization in minds, brains, and machines using a combination of behavioral experiments, computational simulations, and analytical techniques, with the goal of grounding higher-level cognitive phenomena in a neural implementation. Erin earned her Ph.D. from the University of California, Berkeley, with support from Canada’s Natural Sciences and Engineering Research Council. During her Ph.D., Erin spent time at OpenAI, Google Brain, and DeepMind.
Neural networks optimized with generic learning objectives acquire representations that support remarkable behavioral flexibility—from learning from few examples to analogical reasoning—previously seen as uniquely human. While these artificial learning systems simulate how cognitive capacities can emerge through experience, those capacities arise from complex interactions among architecture, learning algorithm, and training data that we struggle to interpret and validate, limiting the value of neural networks as scientific models of cognition. My research addresses this epistemic challenge by connecting high-level computational properties of neural systems to their low-level mechanistic details, making these systems more interpretable and manipulable for science and practice alike. I will present two case studies demonstrating this approach: how meta-learning in neural networks can be reinterpreted through the lens of hierarchical Bayesian inference, and how sparse representations can emerge naturally through the dynamics of learning in neural networks. Through these examples, I will illustrate how interpreting and analyzing neural networks sheds light on their emergent computational properties, laying the groundwork for a more productive account of how cognitive capacities arise in neural systems.
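As a rough illustration of the first case study, the sketch below shows a first-order, MAML-style meta-learning loop on a toy one-dimensional regression family, annotated with the hierarchical Bayesian reading: the shared initialization plays the role of a prior mean over task-specific parameters, and the few inner-loop gradient steps approximate per-task MAP inference under that prior. The toy task distribution, learning rates, and first-order approximation are illustrative assumptions, not details taken from the talk.

```python
# Minimal sketch (illustrative only): first-order MAML on toy 1-D linear
# regression, with comments indicating the hierarchical-Bayes reading.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Each task is y = a * x with its own slope a ~ N(2, 0.5^2)."""
    a = rng.normal(2.0, 0.5)
    x = rng.uniform(-1.0, 1.0, size=10)
    return x, a * x

def loss_and_grad(theta, x, y):
    """Squared-error loss and gradient for the scalar model y_hat = theta * x."""
    err = theta * x - y
    return np.mean(err ** 2), 2.0 * np.mean(err * x)

theta = 0.0                      # meta-parameter: acts like a prior mean over task slopes
inner_lr, outer_lr, k = 0.1, 0.05, 5

for step in range(2000):
    x, y = sample_task()
    # Inner loop: a few gradient steps from theta, read as truncated MAP
    # inference of this task's parameter under an implicit Gaussian prior
    # centered at the shared initialization theta.
    phi = theta
    for _ in range(k):
        _, g = loss_and_grad(phi, x, y)
        phi -= inner_lr * g
    # Outer loop (first-order approximation): move the "prior mean" so that
    # the adapted, task-specific parameters fit data from new tasks well.
    _, g_adapted = loss_and_grad(phi, x, y)
    theta -= outer_lr * g_adapted

print(f"meta-learned initialization: {theta:.2f} (tasks have mean slope 2.0)")
```

In this reading, meta-learning the initialization is equivalent to learning the parameters of a prior that is shared across tasks, which is one way the high-level computational description (hierarchical Bayesian inference) can be tied to the low-level mechanics of gradient-based training.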