Today, we continue our NeurIPS series with Dan Friedman, a PhD student in the Princeton NLP group. In our conversation, we explore his research on mechanistic interpretability for transformer models, specifically his paper, Learning Transformer Programs. The LTP paper proposes modifications to the transformer architecture that allow transformer models to be easily converted into human-readable programs, making them inherently interpretable. In our conversation, we compare the approach proposed by this research with prior approaches to understanding these models, and the shortcomings of those earlier methods. We also dig into the approach's functional and scaling limitations and constraints.
The complete show notes for this episode can be found at twimlai.com/go/667.
Advancing Hands-On Machine Learning Education with Sebastian Raschka - #565
Big Science and Embodied Learning at Hugging Face 🤗 with Thomas Wolf - #564
Full-Stack AI Systems Development with Murali Akula - #563
100x Improvements in Deep Learning Performance with Sparsity, w/ Subutai Ahmad - #562
Scaling BERT and GPT for Financial Services with Jennifer Glore - #561
Trends in Deep Reinforcement Learning with Kamyar Azizzadenesheli - #560
Deep Reinforcement Learning at the Edge of the Statistical Precipice with Rishabh Agarwal - #559
Designing New Energy Materials with Machine Learning with Rafael Gomez-Bombarelli - #558
Differentiable Programming for Oceanography with Patrick Heimbach - #557
Trends in Machine Learning & Deep Learning with Zachary Lipton - #556
Solving the Cocktail Party Problem with Machine Learning, w/ Jonathan Le Roux - #555
Machine Learning for Earthquake Seismology with Karianne Bergen - #554
The New DBfication of ML/AI with Arun Kumar - #553
Building Public Interest Technology with Meredith Broussard - #552
A Universal Law of Robustness via Isoperimetry with Sebastien Bubeck - #551
Trends in NLP with John Bohannon - #550
Trends in Computer Vision with Georgia Gkioxari - #549
Kids Run the Darndest Experiments: Causal Learning in Children with Alison Gopnik - #548
Hypergraphs, Simplicial Complexes and Graph Representations of Complex Systems with Tina Eliassi-Rad - #547
Deep Learning, Transformers, and the Consequences of Scale with Oriol Vinyals - #546