Today we’re joined by Sanmi Koyejo, assistant professor at Stanford University, to continue our NeurIPS 2024 series. In our conversation, Sanmi discusses his two recent award-winning papers. First, we dive into his paper, “Are Emergent Abilities of Large Language Models a Mirage?” We discuss the different ways LLMs are evaluated and the excitement surrounding their “emergent abilities,” such as the ability to perform arithmetic. Sanmi describes how evaluating model performance using nonlinear metrics can create the illusion that a model is rapidly gaining new capabilities, whereas linear metrics show the smooth improvement expected from scaling, casting doubt on the significance of emergence. We then move on to his second paper, “DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models,” discussing the methodology it describes for evaluating concerns such as the toxicity, privacy, fairness, and robustness of LLMs.
The complete show notes for this episode can be found at twimlai.com/go/671.
Advancing Hands-On Machine Learning Education with Sebastian Raschka - #565
Big Science and Embodied Learning at Hugging Face 🤗 with Thomas Wolf - #564
Full-Stack AI Systems Development with Murali Akula - #563
100x Improvements in Deep Learning Performance with Sparsity, w/ Subutai Ahmad - #562
Scaling BERT and GPT for Financial Services with Jennifer Glore - #561
Trends in Deep Reinforcement Learning with Kamyar Azizzadenesheli - #560
Deep Reinforcement Learning at the Edge of the Statistical Precipice with Rishabh Agarwal - #559
Designing New Energy Materials with Machine Learning with Rafael Gomez-Bombarelli - #558
Differentiable Programming for Oceanography with Patrick Heimbach - #557
Trends in Machine Learning & Deep Learning with Zachary Lipton - #556
Solving the Cocktail Party Problem with Machine Learning, w/ Jonathan Le Roux - #555
Machine Learning for Earthquake Seismology with Karianne Bergen - #554
The New DBfication of ML/AI with Arun Kumar - #553
Building Public Interest Technology with Meredith Broussard - #552
A Universal Law of Robustness via Isoperimetry with Sebastien Bubeck - #551
Trends in NLP with John Bohannon - #550
Trends in Computer Vision with Georgia Gkioxari - #549
Kids Run the Darndest Experiments: Causal Learning in Children with Alison Gopnik - #548
Hypergraphs, Simplicial Complexes and Graph Representations of Complex Systems with Tina Eliassi-Rad - #547
Deep Learning, Transformers, and the Consequences of Scale with Oriol Vinyals - #546