Today we’re joined by Sanmi Koyejo, assistant professor at Stanford University, to continue our NeurIPS 2024 series. In our conversation, Sanmi discusses his two recent award-winning papers. First, we dive into his paper, “Are Emergent Abilities of Large Language Models a Mirage?” We discuss the different ways LLMs are evaluated and the excitement surrounding their “emergent abilities,” such as the ability to perform arithmetic. Sanmi describes how evaluating model performance using nonlinear metrics can create the illusion that the model is rapidly gaining new capabilities, whereas linear metrics show smooth improvement as expected, casting doubt on the significance of emergence. We continue on to his next paper, “DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models,” discussing the methodology it describes for evaluating concerns such as the toxicity, privacy, fairness, and robustness of LLMs.
The complete show notes for this episode can be found at twimlai.com/go/671.
Deep Learning is Eating 5G. Here’s How, w/ Joseph Soriaga - #525
Modeling Human Cognition with RNNs and Curriculum Learning, w/ Kanaka Rajan - #524
Do You Dare Run Your ML Experiments in Production? with Ville Tuulos - #523
Delivering Neural Speech Services at Scale with Li Jiang - #522
AI’s Legal and Ethical Implications with Sandra Wachter - #521
Compositional ML and the Future of Software Development with Dillon Erb - #520
Generating SQL Database Queries from Natural Language with Yanshuai Cao - #519
Social Commonsense Reasoning with Yejin Choi - #518
Deep Reinforcement Learning for Game Testing at EA with Konrad Tollmar - #517
Exploring AI 2041 with Kai-Fu Lee - #516
Advancing Robotic Brains and Bodies with Daniela Rus - #515
Neural Synthesis of Binaural Speech From Mono Audio with Alexander Richard - #514
Using Brain Imaging to Improve Neural Networks with Alona Fyshe - #513
Adaptivity in Machine Learning with Samory Kpotufe - #512
A Social Scientist’s Perspective on AI with Eric Rice - #511
Applications of Variational Autoencoders and Bayesian Optimization with José Miguel Hernández Lobato - #510
Codex, OpenAI’s Automated Code Generation API with Greg Brockman - #509
Spatiotemporal Data Analysis with Rose Yu - #508
Parallelism and Acceleration for Large Language Models with Bryan Catanzaro - #507
Applying the Causal Roadmap to Optimal Dynamic Treatment Rules with Lina Montoya - #506